00:00:00.000 Started by upstream project "autotest-per-patch" build number 132777 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.045 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.046 The recommended git tool is: git 00:00:00.046 using credential 00000000-0000-0000-0000-000000000002 00:00:00.048 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.067 Fetching changes from the remote Git repository 00:00:00.072 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.112 Using shallow fetch with depth 1 00:00:00.112 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.112 > git --version # timeout=10 00:00:00.158 > git --version # 'git version 2.39.2' 00:00:00.158 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.192 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.192 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.388 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.402 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.416 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.416 > git config core.sparsecheckout # timeout=10 00:00:03.427 > git read-tree -mu HEAD # timeout=10 00:00:03.441 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.462 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.462 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.577 [Pipeline] Start of Pipeline 00:00:03.590 [Pipeline] library 00:00:03.592 Loading library shm_lib@master 00:00:03.592 Library shm_lib@master is cached. Copying from home. 00:00:03.623 [Pipeline] node 00:33:23.535 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:33:23.538 [Pipeline] { 00:33:23.550 [Pipeline] catchError 00:33:23.552 [Pipeline] { 00:33:23.566 [Pipeline] wrap 00:33:23.577 [Pipeline] { 00:33:23.586 [Pipeline] stage 00:33:23.589 [Pipeline] { (Prologue) 00:33:23.610 [Pipeline] echo 00:33:23.612 Node: VM-host-SM9 00:33:23.620 [Pipeline] cleanWs 00:33:23.631 [WS-CLEANUP] Deleting project workspace... 00:33:23.631 [WS-CLEANUP] Deferred wipeout is used... 
00:33:23.637 [WS-CLEANUP] done 00:33:23.844 [Pipeline] setCustomBuildProperty 00:33:23.945 [Pipeline] httpRequest 00:33:24.352 [Pipeline] echo 00:33:24.354 Sorcerer 10.211.164.101 is alive 00:33:24.365 [Pipeline] retry 00:33:24.368 [Pipeline] { 00:33:24.383 [Pipeline] httpRequest 00:33:24.387 HttpMethod: GET 00:33:24.388 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:24.388 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:24.390 Response Code: HTTP/1.1 200 OK 00:33:24.390 Success: Status code 200 is in the accepted range: 200,404 00:33:24.391 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:24.536 [Pipeline] } 00:33:24.553 [Pipeline] // retry 00:33:24.560 [Pipeline] sh 00:33:24.859 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:24.875 [Pipeline] httpRequest 00:33:25.201 [Pipeline] echo 00:33:25.203 Sorcerer 10.211.164.101 is alive 00:33:25.213 [Pipeline] retry 00:33:25.216 [Pipeline] { 00:33:25.232 [Pipeline] httpRequest 00:33:25.237 HttpMethod: GET 00:33:25.237 URL: http://10.211.164.101/packages/spdk_afe42438afd8d09d1fba88e960ce92b846fc8579.tar.gz 00:33:25.238 Sending request to url: http://10.211.164.101/packages/spdk_afe42438afd8d09d1fba88e960ce92b846fc8579.tar.gz 00:33:25.239 Response Code: HTTP/1.1 200 OK 00:33:25.239 Success: Status code 200 is in the accepted range: 200,404 00:33:25.240 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_afe42438afd8d09d1fba88e960ce92b846fc8579.tar.gz 00:33:27.512 [Pipeline] } 00:33:27.528 [Pipeline] // retry 00:33:27.535 [Pipeline] sh 00:33:27.813 + tar --no-same-owner -xf spdk_afe42438afd8d09d1fba88e960ce92b846fc8579.tar.gz 00:33:31.106 [Pipeline] sh 00:33:31.384 + git -C spdk log --oneline -n5 00:33:31.384 afe42438a env: use 4-KiB memory mapping in no-huge mode 00:33:31.384 cabd61f7f env: extend the page table to support 4-KiB mapping 00:33:31.384 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode 00:33:31.384 421ce3854 env: add mem_map_fini and vtophys_fini to cleanup mem maps 00:33:31.384 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:33:31.402 [Pipeline] writeFile 00:33:31.415 [Pipeline] sh 00:33:31.695 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:33:31.706 [Pipeline] sh 00:33:31.986 + cat autorun-spdk.conf 00:33:31.986 SPDK_RUN_FUNCTIONAL_TEST=1 00:33:31.986 SPDK_TEST_NVMF=1 00:33:31.986 SPDK_TEST_NVMF_TRANSPORT=tcp 00:33:31.986 SPDK_TEST_URING=1 00:33:31.986 SPDK_TEST_USDT=1 00:33:31.986 SPDK_RUN_UBSAN=1 00:33:31.986 NET_TYPE=virt 00:33:31.986 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:33:31.993 RUN_NIGHTLY=0 00:33:31.994 [Pipeline] } 00:33:32.007 [Pipeline] // stage 00:33:32.022 [Pipeline] stage 00:33:32.024 [Pipeline] { (Run VM) 00:33:32.033 [Pipeline] sh 00:33:32.308 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:33:32.308 + echo 'Start stage prepare_nvme.sh' 00:33:32.308 Start stage prepare_nvme.sh 00:33:32.308 + [[ -n 3 ]] 00:33:32.308 + disk_prefix=ex3 00:33:32.308 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:33:32.308 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:33:32.308 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:33:32.308 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:33:32.308 ++ SPDK_TEST_NVMF=1 00:33:32.308 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:33:32.308 ++ SPDK_TEST_URING=1 00:33:32.308 ++ SPDK_TEST_USDT=1 00:33:32.308 ++ SPDK_RUN_UBSAN=1 00:33:32.308 ++ NET_TYPE=virt 00:33:32.308 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:33:32.308 ++ RUN_NIGHTLY=0 00:33:32.308 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:33:32.308 + nvme_files=() 00:33:32.308 + declare -A nvme_files 00:33:32.308 + backend_dir=/var/lib/libvirt/images/backends 00:33:32.308 + nvme_files['nvme.img']=5G 00:33:32.308 + nvme_files['nvme-cmb.img']=5G 00:33:32.308 + nvme_files['nvme-multi0.img']=4G 00:33:32.308 + nvme_files['nvme-multi1.img']=4G 00:33:32.308 + nvme_files['nvme-multi2.img']=4G 00:33:32.308 + nvme_files['nvme-openstack.img']=8G 00:33:32.308 + nvme_files['nvme-zns.img']=5G 00:33:32.308 + (( SPDK_TEST_NVME_PMR == 1 )) 00:33:32.308 + (( SPDK_TEST_FTL == 1 )) 00:33:32.308 + (( SPDK_TEST_NVME_FDP == 1 )) 00:33:32.308 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:33:32.308 + for nvme in "${!nvme_files[@]}" 00:33:32.308 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:33:32.308 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:33:32.308 + for nvme in "${!nvme_files[@]}" 00:33:32.308 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:33:32.308 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:33:32.308 + for nvme in "${!nvme_files[@]}" 00:33:32.308 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:33:32.308 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:33:32.308 + for nvme in "${!nvme_files[@]}" 00:33:32.308 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:33:32.308 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:33:32.308 + for nvme in "${!nvme_files[@]}" 00:33:32.308 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:33:32.308 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:33:32.308 + for nvme in "${!nvme_files[@]}" 00:33:32.308 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:33:32.308 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:33:32.308 + for nvme in "${!nvme_files[@]}" 00:33:32.308 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:33:32.566 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:33:32.566 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:33:32.566 + echo 'End stage prepare_nvme.sh' 00:33:32.566 End stage prepare_nvme.sh 00:33:32.577 [Pipeline] sh 00:33:32.857 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:33:32.857 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b 
/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:33:32.857 00:33:32.857 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:33:32.857 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:33:32.857 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:33:32.857 HELP=0 00:33:32.857 DRY_RUN=0 00:33:32.857 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:33:32.857 NVME_DISKS_TYPE=nvme,nvme, 00:33:32.857 NVME_AUTO_CREATE=0 00:33:32.857 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:33:32.857 NVME_CMB=,, 00:33:32.857 NVME_PMR=,, 00:33:32.857 NVME_ZNS=,, 00:33:32.857 NVME_MS=,, 00:33:32.857 NVME_FDP=,, 00:33:32.857 SPDK_VAGRANT_DISTRO=fedora39 00:33:32.857 SPDK_VAGRANT_VMCPU=10 00:33:32.857 SPDK_VAGRANT_VMRAM=12288 00:33:32.857 SPDK_VAGRANT_PROVIDER=libvirt 00:33:32.857 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:33:32.857 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:33:32.857 SPDK_OPENSTACK_NETWORK=0 00:33:32.857 VAGRANT_PACKAGE_BOX=0 00:33:32.857 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:33:32.857 FORCE_DISTRO=true 00:33:32.857 VAGRANT_BOX_VERSION= 00:33:32.857 EXTRA_VAGRANTFILES= 00:33:32.857 NIC_MODEL=e1000 00:33:32.857 00:33:32.857 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:33:32.857 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:33:36.140 Bringing machine 'default' up with 'libvirt' provider... 00:33:36.722 ==> default: Creating image (snapshot of base box volume). 00:33:36.722 ==> default: Creating domain with the following settings... 
00:33:36.722 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733721863_79eeacc2d96bc72f29d9 00:33:36.722 ==> default: -- Domain type: kvm 00:33:36.722 ==> default: -- Cpus: 10 00:33:36.722 ==> default: -- Feature: acpi 00:33:36.722 ==> default: -- Feature: apic 00:33:36.722 ==> default: -- Feature: pae 00:33:36.722 ==> default: -- Memory: 12288M 00:33:36.722 ==> default: -- Memory Backing: hugepages: 00:33:36.722 ==> default: -- Management MAC: 00:33:36.722 ==> default: -- Loader: 00:33:36.722 ==> default: -- Nvram: 00:33:36.722 ==> default: -- Base box: spdk/fedora39 00:33:36.722 ==> default: -- Storage pool: default 00:33:36.722 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733721863_79eeacc2d96bc72f29d9.img (20G) 00:33:36.722 ==> default: -- Volume Cache: default 00:33:36.722 ==> default: -- Kernel: 00:33:36.722 ==> default: -- Initrd: 00:33:36.722 ==> default: -- Graphics Type: vnc 00:33:36.722 ==> default: -- Graphics Port: -1 00:33:36.722 ==> default: -- Graphics IP: 127.0.0.1 00:33:36.722 ==> default: -- Graphics Password: Not defined 00:33:36.722 ==> default: -- Video Type: cirrus 00:33:36.722 ==> default: -- Video VRAM: 9216 00:33:36.722 ==> default: -- Sound Type: 00:33:36.722 ==> default: -- Keymap: en-us 00:33:36.722 ==> default: -- TPM Path: 00:33:36.722 ==> default: -- INPUT: type=mouse, bus=ps2 00:33:36.722 ==> default: -- Command line args: 00:33:36.722 ==> default: -> value=-device, 00:33:36.722 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:33:36.722 ==> default: -> value=-drive, 00:33:36.722 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:33:36.722 ==> default: -> value=-device, 00:33:36.722 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:33:36.722 ==> default: -> value=-device, 00:33:36.722 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:33:36.722 ==> default: -> value=-drive, 00:33:36.722 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:33:36.722 ==> default: -> value=-device, 00:33:36.722 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:33:36.722 ==> default: -> value=-drive, 00:33:36.722 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:33:36.722 ==> default: -> value=-device, 00:33:36.722 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:33:36.722 ==> default: -> value=-drive, 00:33:36.722 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:33:36.722 ==> default: -> value=-device, 00:33:36.722 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:33:36.722 ==> default: Creating shared folders metadata... 00:33:36.722 ==> default: Starting domain. 00:33:38.097 ==> default: Waiting for domain to get an IP address... 00:33:56.230 ==> default: Waiting for SSH to become available... 00:33:56.230 ==> default: Configuring and enabling network interfaces... 
00:33:58.762 default: SSH address: 192.168.121.198:22 00:33:58.762 default: SSH username: vagrant 00:33:58.762 default: SSH auth method: private key 00:34:00.658 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:34:08.764 ==> default: Mounting SSHFS shared folder... 00:34:10.142 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:34:10.142 ==> default: Checking Mount.. 00:34:11.083 ==> default: Folder Successfully Mounted! 00:34:11.083 ==> default: Running provisioner: file... 00:34:12.102 default: ~/.gitconfig => .gitconfig 00:34:12.361 00:34:12.361 SUCCESS! 00:34:12.361 00:34:12.361 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:34:12.361 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:34:12.361 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:34:12.361 00:34:12.370 [Pipeline] } 00:34:12.383 [Pipeline] // stage 00:34:12.391 [Pipeline] dir 00:34:12.392 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:34:12.393 [Pipeline] { 00:34:12.405 [Pipeline] catchError 00:34:12.406 [Pipeline] { 00:34:12.418 [Pipeline] sh 00:34:12.696 + vagrant ssh-config --host vagrant 00:34:12.697 + sed -ne /^Host/,$p 00:34:12.697 + tee ssh_conf 00:34:15.983 Host vagrant 00:34:15.983 HostName 192.168.121.198 00:34:15.983 User vagrant 00:34:15.983 Port 22 00:34:15.983 UserKnownHostsFile /dev/null 00:34:15.983 StrictHostKeyChecking no 00:34:15.983 PasswordAuthentication no 00:34:15.983 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:34:15.983 IdentitiesOnly yes 00:34:15.983 LogLevel FATAL 00:34:15.983 ForwardAgent yes 00:34:15.983 ForwardX11 yes 00:34:15.983 00:34:15.994 [Pipeline] withEnv 00:34:15.996 [Pipeline] { 00:34:16.008 [Pipeline] sh 00:34:16.283 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:34:16.283 source /etc/os-release 00:34:16.283 [[ -e /image.version ]] && img=$(< /image.version) 00:34:16.283 # Minimal, systemd-like check. 00:34:16.283 if [[ -e /.dockerenv ]]; then 00:34:16.283 # Clear garbage from the node's name: 00:34:16.283 # agt-er_autotest_547-896 -> autotest_547-896 00:34:16.283 # $HOSTNAME is the actual container id 00:34:16.283 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:34:16.283 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:34:16.283 # We can assume this is a mount from a host where container is running, 00:34:16.283 # so fetch its hostname to easily identify the target swarm worker. 
00:34:16.284 container="$(< /etc/hostname) ($agent)" 00:34:16.284 else 00:34:16.284 # Fallback 00:34:16.284 container=$agent 00:34:16.284 fi 00:34:16.284 fi 00:34:16.284 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:34:16.284 00:34:16.552 [Pipeline] } 00:34:16.563 [Pipeline] // withEnv 00:34:16.569 [Pipeline] setCustomBuildProperty 00:34:16.582 [Pipeline] stage 00:34:16.584 [Pipeline] { (Tests) 00:34:16.598 [Pipeline] sh 00:34:16.876 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:34:17.146 [Pipeline] sh 00:34:17.424 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:34:17.728 [Pipeline] timeout 00:34:17.728 Timeout set to expire in 1 hr 0 min 00:34:17.730 [Pipeline] { 00:34:17.741 [Pipeline] sh 00:34:18.018 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:34:18.592 HEAD is now at afe42438a env: use 4-KiB memory mapping in no-huge mode 00:34:18.603 [Pipeline] sh 00:34:18.882 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:34:19.151 [Pipeline] sh 00:34:19.427 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:34:19.730 [Pipeline] sh 00:34:20.010 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:34:20.269 ++ readlink -f spdk_repo 00:34:20.269 + DIR_ROOT=/home/vagrant/spdk_repo 00:34:20.269 + [[ -n /home/vagrant/spdk_repo ]] 00:34:20.269 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:34:20.269 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:34:20.269 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:34:20.269 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:34:20.269 + [[ -d /home/vagrant/spdk_repo/output ]] 00:34:20.269 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:34:20.269 + cd /home/vagrant/spdk_repo 00:34:20.269 + source /etc/os-release 00:34:20.269 ++ NAME='Fedora Linux' 00:34:20.269 ++ VERSION='39 (Cloud Edition)' 00:34:20.269 ++ ID=fedora 00:34:20.269 ++ VERSION_ID=39 00:34:20.269 ++ VERSION_CODENAME= 00:34:20.269 ++ PLATFORM_ID=platform:f39 00:34:20.269 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:34:20.269 ++ ANSI_COLOR='0;38;2;60;110;180' 00:34:20.269 ++ LOGO=fedora-logo-icon 00:34:20.269 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:34:20.269 ++ HOME_URL=https://fedoraproject.org/ 00:34:20.269 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:34:20.269 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:34:20.269 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:34:20.269 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:34:20.269 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:34:20.269 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:34:20.269 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:34:20.269 ++ SUPPORT_END=2024-11-12 00:34:20.269 ++ VARIANT='Cloud Edition' 00:34:20.269 ++ VARIANT_ID=cloud 00:34:20.269 + uname -a 00:34:20.269 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:34:20.269 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:34:20.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:20.787 Hugepages 00:34:20.787 node hugesize free / total 00:34:20.787 node0 1048576kB 0 / 0 00:34:20.787 node0 2048kB 0 / 0 00:34:20.787 00:34:20.787 Type BDF Vendor Device NUMA Driver Device Block devices 00:34:20.787 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:34:20.787 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:34:20.787 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:34:20.787 + rm -f /tmp/spdk-ld-path 00:34:20.787 + source autorun-spdk.conf 00:34:20.787 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:34:20.787 ++ SPDK_TEST_NVMF=1 00:34:20.787 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:34:20.787 ++ SPDK_TEST_URING=1 00:34:20.787 ++ SPDK_TEST_USDT=1 00:34:20.787 ++ SPDK_RUN_UBSAN=1 00:34:20.787 ++ NET_TYPE=virt 00:34:20.787 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:34:20.787 ++ RUN_NIGHTLY=0 00:34:20.787 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:34:20.787 + [[ -n '' ]] 00:34:20.787 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:34:20.787 + for M in /var/spdk/build-*-manifest.txt 00:34:20.787 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:34:20.787 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:34:20.787 + for M in /var/spdk/build-*-manifest.txt 00:34:20.787 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:34:20.787 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:34:20.787 + for M in /var/spdk/build-*-manifest.txt 00:34:20.787 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:34:20.787 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:34:20.787 ++ uname 00:34:20.787 + [[ Linux == \L\i\n\u\x ]] 00:34:20.787 + sudo dmesg -T 00:34:20.787 + sudo dmesg --clear 00:34:20.787 + dmesg_pid=5259 00:34:20.787 + [[ Fedora Linux == FreeBSD ]] 00:34:20.787 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:20.787 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:20.787 + sudo 
dmesg -Tw 00:34:20.787 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:34:20.787 + [[ -x /usr/src/fio-static/fio ]] 00:34:20.787 + export FIO_BIN=/usr/src/fio-static/fio 00:34:20.787 + FIO_BIN=/usr/src/fio-static/fio 00:34:20.787 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:34:20.787 + [[ ! -v VFIO_QEMU_BIN ]] 00:34:20.787 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:34:20.787 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:20.787 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:20.787 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:34:20.787 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:20.787 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:20.787 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:34:21.046 05:25:07 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:34:21.046 05:25:07 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:34:21.046 05:25:07 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:34:21.046 05:25:07 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:34:21.046 05:25:07 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:34:21.046 05:25:07 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:34:21.046 05:25:07 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:34:21.046 05:25:07 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:34:21.046 05:25:07 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:34:21.046 05:25:07 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:34:21.047 05:25:07 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:34:21.047 05:25:07 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:34:21.047 05:25:07 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:34:21.047 05:25:07 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:34:21.047 05:25:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:21.047 05:25:07 -- scripts/common.sh@15 -- $ shopt -s extglob 00:34:21.047 05:25:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:21.047 05:25:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.047 05:25:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.047 05:25:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.047 05:25:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.047 05:25:07 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.047 05:25:07 -- paths/export.sh@5 -- $ export PATH 00:34:21.047 05:25:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.047 05:25:07 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:34:21.047 05:25:07 -- common/autobuild_common.sh@493 -- $ date +%s 00:34:21.047 05:25:07 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733721907.XXXXXX 00:34:21.047 05:25:07 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733721907.0jjfdU 00:34:21.047 05:25:07 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:34:21.047 05:25:07 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:34:21.047 05:25:07 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:34:21.047 05:25:07 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:34:21.047 05:25:07 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:34:21.047 05:25:07 -- common/autobuild_common.sh@509 -- $ get_config_params 00:34:21.047 05:25:07 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:34:21.047 05:25:07 -- common/autotest_common.sh@10 -- $ set +x 00:34:21.047 05:25:07 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:34:21.047 05:25:07 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:34:21.047 05:25:07 -- pm/common@17 -- $ local monitor 00:34:21.047 05:25:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:21.047 05:25:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:21.047 05:25:07 -- pm/common@25 -- $ sleep 1 00:34:21.047 05:25:07 -- pm/common@21 -- $ date +%s 00:34:21.047 05:25:07 -- pm/common@21 -- $ date +%s 00:34:21.047 05:25:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733721907 00:34:21.047 05:25:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733721907 00:34:21.047 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733721907_collect-cpu-load.pm.log 00:34:21.047 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733721907_collect-vmstat.pm.log 00:34:21.983 05:25:08 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:34:21.983 05:25:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:34:21.983 05:25:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:34:21.983 05:25:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:34:21.983 05:25:08 -- spdk/autobuild.sh@16 -- $ date -u 00:34:21.983 Mon Dec 9 05:25:08 AM UTC 2024 00:34:21.983 05:25:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:34:21.983 v25.01-pre-280-gafe42438a 00:34:21.983 05:25:08 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:34:21.983 05:25:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:34:21.983 05:25:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:34:21.983 05:25:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:34:21.983 05:25:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:34:21.983 05:25:08 -- common/autotest_common.sh@10 -- $ set +x 00:34:21.983 ************************************ 00:34:21.983 START TEST ubsan 00:34:21.983 ************************************ 00:34:21.983 using ubsan 00:34:21.983 05:25:08 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:34:21.983 00:34:21.983 real 0m0.000s 00:34:21.983 user 0m0.000s 00:34:21.983 sys 0m0.000s 00:34:21.983 05:25:08 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:34:21.983 ************************************ 00:34:21.983 END TEST ubsan 00:34:21.983 ************************************ 00:34:21.983 05:25:08 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:34:22.242 05:25:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:34:22.242 05:25:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:34:22.242 05:25:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:34:22.242 05:25:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:34:22.242 05:25:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:34:22.242 05:25:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:34:22.242 05:25:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:34:22.242 05:25:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:34:22.242 05:25:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:34:22.242 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:34:22.242 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:34:22.809 Using 'verbs' RDMA provider 00:34:35.949 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:34:50.826 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:34:50.826 Creating mk/config.mk...done. 00:34:50.826 Creating mk/cc.flags.mk...done. 00:34:50.826 Type 'make' to build. 
00:34:50.826 05:25:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:34:50.826 05:25:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:34:50.826 05:25:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:34:50.826 05:25:36 -- common/autotest_common.sh@10 -- $ set +x 00:34:50.826 ************************************ 00:34:50.826 START TEST make 00:34:50.826 ************************************ 00:34:50.826 05:25:36 make -- common/autotest_common.sh@1129 -- $ make -j10 00:34:50.826 make[1]: Nothing to be done for 'all'. 00:35:03.034 The Meson build system 00:35:03.034 Version: 1.5.0 00:35:03.034 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:35:03.034 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:35:03.034 Build type: native build 00:35:03.034 Program cat found: YES (/usr/bin/cat) 00:35:03.034 Project name: DPDK 00:35:03.034 Project version: 24.03.0 00:35:03.034 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:35:03.034 C linker for the host machine: cc ld.bfd 2.40-14 00:35:03.034 Host machine cpu family: x86_64 00:35:03.034 Host machine cpu: x86_64 00:35:03.034 Message: ## Building in Developer Mode ## 00:35:03.034 Program pkg-config found: YES (/usr/bin/pkg-config) 00:35:03.034 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:35:03.034 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:35:03.034 Program python3 found: YES (/usr/bin/python3) 00:35:03.034 Program cat found: YES (/usr/bin/cat) 00:35:03.034 Compiler for C supports arguments -march=native: YES 00:35:03.034 Checking for size of "void *" : 8 00:35:03.034 Checking for size of "void *" : 8 (cached) 00:35:03.034 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:35:03.034 Library m found: YES 00:35:03.034 Library numa found: YES 00:35:03.034 Has header "numaif.h" : YES 00:35:03.034 Library fdt found: NO 00:35:03.034 Library execinfo found: NO 00:35:03.034 Has header "execinfo.h" : YES 00:35:03.034 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:35:03.034 Run-time dependency libarchive found: NO (tried pkgconfig) 00:35:03.034 Run-time dependency libbsd found: NO (tried pkgconfig) 00:35:03.034 Run-time dependency jansson found: NO (tried pkgconfig) 00:35:03.034 Run-time dependency openssl found: YES 3.1.1 00:35:03.034 Run-time dependency libpcap found: YES 1.10.4 00:35:03.034 Has header "pcap.h" with dependency libpcap: YES 00:35:03.034 Compiler for C supports arguments -Wcast-qual: YES 00:35:03.034 Compiler for C supports arguments -Wdeprecated: YES 00:35:03.034 Compiler for C supports arguments -Wformat: YES 00:35:03.034 Compiler for C supports arguments -Wformat-nonliteral: NO 00:35:03.034 Compiler for C supports arguments -Wformat-security: NO 00:35:03.034 Compiler for C supports arguments -Wmissing-declarations: YES 00:35:03.034 Compiler for C supports arguments -Wmissing-prototypes: YES 00:35:03.034 Compiler for C supports arguments -Wnested-externs: YES 00:35:03.034 Compiler for C supports arguments -Wold-style-definition: YES 00:35:03.034 Compiler for C supports arguments -Wpointer-arith: YES 00:35:03.034 Compiler for C supports arguments -Wsign-compare: YES 00:35:03.034 Compiler for C supports arguments -Wstrict-prototypes: YES 00:35:03.034 Compiler for C supports arguments -Wundef: YES 00:35:03.034 Compiler for C supports arguments -Wwrite-strings: YES 00:35:03.034 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:35:03.034 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:35:03.034 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:35:03.034 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:35:03.034 Program objdump found: YES (/usr/bin/objdump) 00:35:03.034 Compiler for C supports arguments -mavx512f: YES 00:35:03.034 Checking if "AVX512 checking" compiles: YES 00:35:03.034 Fetching value of define "__SSE4_2__" : 1 00:35:03.034 Fetching value of define "__AES__" : 1 00:35:03.034 Fetching value of define "__AVX__" : 1 00:35:03.034 Fetching value of define "__AVX2__" : 1 00:35:03.034 Fetching value of define "__AVX512BW__" : (undefined) 00:35:03.034 Fetching value of define "__AVX512CD__" : (undefined) 00:35:03.034 Fetching value of define "__AVX512DQ__" : (undefined) 00:35:03.034 Fetching value of define "__AVX512F__" : (undefined) 00:35:03.035 Fetching value of define "__AVX512VL__" : (undefined) 00:35:03.035 Fetching value of define "__PCLMUL__" : 1 00:35:03.035 Fetching value of define "__RDRND__" : 1 00:35:03.035 Fetching value of define "__RDSEED__" : 1 00:35:03.035 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:35:03.035 Fetching value of define "__znver1__" : (undefined) 00:35:03.035 Fetching value of define "__znver2__" : (undefined) 00:35:03.035 Fetching value of define "__znver3__" : (undefined) 00:35:03.035 Fetching value of define "__znver4__" : (undefined) 00:35:03.035 Compiler for C supports arguments -Wno-format-truncation: YES 00:35:03.035 Message: lib/log: Defining dependency "log" 00:35:03.035 Message: lib/kvargs: Defining dependency "kvargs" 00:35:03.035 Message: lib/telemetry: Defining dependency "telemetry" 00:35:03.035 Checking for function "getentropy" : NO 00:35:03.035 Message: lib/eal: Defining dependency "eal" 00:35:03.035 Message: lib/ring: Defining dependency "ring" 00:35:03.035 Message: lib/rcu: Defining dependency "rcu" 00:35:03.035 Message: lib/mempool: Defining dependency "mempool" 00:35:03.035 Message: lib/mbuf: Defining dependency "mbuf" 00:35:03.035 Fetching value of define "__PCLMUL__" : 1 (cached) 00:35:03.035 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:35:03.035 Compiler for C supports arguments -mpclmul: YES 00:35:03.035 Compiler for C supports arguments -maes: YES 00:35:03.035 Compiler for C supports arguments -mavx512f: YES (cached) 00:35:03.035 Compiler for C supports arguments -mavx512bw: YES 00:35:03.035 Compiler for C supports arguments -mavx512dq: YES 00:35:03.035 Compiler for C supports arguments -mavx512vl: YES 00:35:03.035 Compiler for C supports arguments -mvpclmulqdq: YES 00:35:03.035 Compiler for C supports arguments -mavx2: YES 00:35:03.035 Compiler for C supports arguments -mavx: YES 00:35:03.035 Message: lib/net: Defining dependency "net" 00:35:03.035 Message: lib/meter: Defining dependency "meter" 00:35:03.035 Message: lib/ethdev: Defining dependency "ethdev" 00:35:03.035 Message: lib/pci: Defining dependency "pci" 00:35:03.035 Message: lib/cmdline: Defining dependency "cmdline" 00:35:03.035 Message: lib/hash: Defining dependency "hash" 00:35:03.035 Message: lib/timer: Defining dependency "timer" 00:35:03.035 Message: lib/compressdev: Defining dependency "compressdev" 00:35:03.035 Message: lib/cryptodev: Defining dependency "cryptodev" 00:35:03.035 Message: lib/dmadev: Defining dependency "dmadev" 00:35:03.035 Compiler for C supports arguments -Wno-cast-qual: YES 00:35:03.035 Message: lib/power: Defining 
dependency "power" 00:35:03.035 Message: lib/reorder: Defining dependency "reorder" 00:35:03.035 Message: lib/security: Defining dependency "security" 00:35:03.035 Has header "linux/userfaultfd.h" : YES 00:35:03.035 Has header "linux/vduse.h" : YES 00:35:03.035 Message: lib/vhost: Defining dependency "vhost" 00:35:03.035 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:35:03.035 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:35:03.035 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:35:03.035 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:35:03.035 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:35:03.035 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:35:03.035 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:35:03.035 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:35:03.035 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:35:03.035 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:35:03.035 Program doxygen found: YES (/usr/local/bin/doxygen) 00:35:03.035 Configuring doxy-api-html.conf using configuration 00:35:03.035 Configuring doxy-api-man.conf using configuration 00:35:03.035 Program mandb found: YES (/usr/bin/mandb) 00:35:03.035 Program sphinx-build found: NO 00:35:03.035 Configuring rte_build_config.h using configuration 00:35:03.035 Message: 00:35:03.035 ================= 00:35:03.035 Applications Enabled 00:35:03.035 ================= 00:35:03.035 00:35:03.035 apps: 00:35:03.035 00:35:03.035 00:35:03.035 Message: 00:35:03.035 ================= 00:35:03.035 Libraries Enabled 00:35:03.035 ================= 00:35:03.035 00:35:03.035 libs: 00:35:03.035 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:35:03.035 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:35:03.035 cryptodev, dmadev, power, reorder, security, vhost, 00:35:03.035 00:35:03.035 Message: 00:35:03.035 =============== 00:35:03.035 Drivers Enabled 00:35:03.035 =============== 00:35:03.035 00:35:03.035 common: 00:35:03.035 00:35:03.035 bus: 00:35:03.035 pci, vdev, 00:35:03.035 mempool: 00:35:03.035 ring, 00:35:03.035 dma: 00:35:03.035 00:35:03.035 net: 00:35:03.035 00:35:03.035 crypto: 00:35:03.035 00:35:03.035 compress: 00:35:03.035 00:35:03.035 vdpa: 00:35:03.035 00:35:03.035 00:35:03.035 Message: 00:35:03.035 ================= 00:35:03.035 Content Skipped 00:35:03.035 ================= 00:35:03.035 00:35:03.035 apps: 00:35:03.035 dumpcap: explicitly disabled via build config 00:35:03.035 graph: explicitly disabled via build config 00:35:03.035 pdump: explicitly disabled via build config 00:35:03.035 proc-info: explicitly disabled via build config 00:35:03.035 test-acl: explicitly disabled via build config 00:35:03.035 test-bbdev: explicitly disabled via build config 00:35:03.035 test-cmdline: explicitly disabled via build config 00:35:03.035 test-compress-perf: explicitly disabled via build config 00:35:03.035 test-crypto-perf: explicitly disabled via build config 00:35:03.035 test-dma-perf: explicitly disabled via build config 00:35:03.035 test-eventdev: explicitly disabled via build config 00:35:03.035 test-fib: explicitly disabled via build config 00:35:03.035 test-flow-perf: explicitly disabled via build config 00:35:03.035 test-gpudev: explicitly disabled via build config 00:35:03.035 test-mldev: explicitly disabled via build config 00:35:03.035 test-pipeline: 
explicitly disabled via build config 00:35:03.035 test-pmd: explicitly disabled via build config 00:35:03.035 test-regex: explicitly disabled via build config 00:35:03.035 test-sad: explicitly disabled via build config 00:35:03.035 test-security-perf: explicitly disabled via build config 00:35:03.035 00:35:03.035 libs: 00:35:03.035 argparse: explicitly disabled via build config 00:35:03.035 metrics: explicitly disabled via build config 00:35:03.035 acl: explicitly disabled via build config 00:35:03.035 bbdev: explicitly disabled via build config 00:35:03.035 bitratestats: explicitly disabled via build config 00:35:03.035 bpf: explicitly disabled via build config 00:35:03.035 cfgfile: explicitly disabled via build config 00:35:03.035 distributor: explicitly disabled via build config 00:35:03.035 efd: explicitly disabled via build config 00:35:03.035 eventdev: explicitly disabled via build config 00:35:03.035 dispatcher: explicitly disabled via build config 00:35:03.035 gpudev: explicitly disabled via build config 00:35:03.035 gro: explicitly disabled via build config 00:35:03.035 gso: explicitly disabled via build config 00:35:03.035 ip_frag: explicitly disabled via build config 00:35:03.035 jobstats: explicitly disabled via build config 00:35:03.035 latencystats: explicitly disabled via build config 00:35:03.035 lpm: explicitly disabled via build config 00:35:03.035 member: explicitly disabled via build config 00:35:03.035 pcapng: explicitly disabled via build config 00:35:03.035 rawdev: explicitly disabled via build config 00:35:03.035 regexdev: explicitly disabled via build config 00:35:03.035 mldev: explicitly disabled via build config 00:35:03.035 rib: explicitly disabled via build config 00:35:03.035 sched: explicitly disabled via build config 00:35:03.035 stack: explicitly disabled via build config 00:35:03.035 ipsec: explicitly disabled via build config 00:35:03.035 pdcp: explicitly disabled via build config 00:35:03.035 fib: explicitly disabled via build config 00:35:03.035 port: explicitly disabled via build config 00:35:03.035 pdump: explicitly disabled via build config 00:35:03.035 table: explicitly disabled via build config 00:35:03.035 pipeline: explicitly disabled via build config 00:35:03.035 graph: explicitly disabled via build config 00:35:03.035 node: explicitly disabled via build config 00:35:03.035 00:35:03.035 drivers: 00:35:03.035 common/cpt: not in enabled drivers build config 00:35:03.035 common/dpaax: not in enabled drivers build config 00:35:03.035 common/iavf: not in enabled drivers build config 00:35:03.035 common/idpf: not in enabled drivers build config 00:35:03.035 common/ionic: not in enabled drivers build config 00:35:03.035 common/mvep: not in enabled drivers build config 00:35:03.035 common/octeontx: not in enabled drivers build config 00:35:03.035 bus/auxiliary: not in enabled drivers build config 00:35:03.035 bus/cdx: not in enabled drivers build config 00:35:03.035 bus/dpaa: not in enabled drivers build config 00:35:03.035 bus/fslmc: not in enabled drivers build config 00:35:03.035 bus/ifpga: not in enabled drivers build config 00:35:03.035 bus/platform: not in enabled drivers build config 00:35:03.035 bus/uacce: not in enabled drivers build config 00:35:03.035 bus/vmbus: not in enabled drivers build config 00:35:03.035 common/cnxk: not in enabled drivers build config 00:35:03.035 common/mlx5: not in enabled drivers build config 00:35:03.035 common/nfp: not in enabled drivers build config 00:35:03.035 common/nitrox: not in enabled drivers build config 
00:35:03.035 common/qat: not in enabled drivers build config 00:35:03.035 common/sfc_efx: not in enabled drivers build config 00:35:03.035 mempool/bucket: not in enabled drivers build config 00:35:03.035 mempool/cnxk: not in enabled drivers build config 00:35:03.035 mempool/dpaa: not in enabled drivers build config 00:35:03.035 mempool/dpaa2: not in enabled drivers build config 00:35:03.035 mempool/octeontx: not in enabled drivers build config 00:35:03.035 mempool/stack: not in enabled drivers build config 00:35:03.035 dma/cnxk: not in enabled drivers build config 00:35:03.035 dma/dpaa: not in enabled drivers build config 00:35:03.035 dma/dpaa2: not in enabled drivers build config 00:35:03.035 dma/hisilicon: not in enabled drivers build config 00:35:03.035 dma/idxd: not in enabled drivers build config 00:35:03.035 dma/ioat: not in enabled drivers build config 00:35:03.036 dma/skeleton: not in enabled drivers build config 00:35:03.036 net/af_packet: not in enabled drivers build config 00:35:03.036 net/af_xdp: not in enabled drivers build config 00:35:03.036 net/ark: not in enabled drivers build config 00:35:03.036 net/atlantic: not in enabled drivers build config 00:35:03.036 net/avp: not in enabled drivers build config 00:35:03.036 net/axgbe: not in enabled drivers build config 00:35:03.036 net/bnx2x: not in enabled drivers build config 00:35:03.036 net/bnxt: not in enabled drivers build config 00:35:03.036 net/bonding: not in enabled drivers build config 00:35:03.036 net/cnxk: not in enabled drivers build config 00:35:03.036 net/cpfl: not in enabled drivers build config 00:35:03.036 net/cxgbe: not in enabled drivers build config 00:35:03.036 net/dpaa: not in enabled drivers build config 00:35:03.036 net/dpaa2: not in enabled drivers build config 00:35:03.036 net/e1000: not in enabled drivers build config 00:35:03.036 net/ena: not in enabled drivers build config 00:35:03.036 net/enetc: not in enabled drivers build config 00:35:03.036 net/enetfec: not in enabled drivers build config 00:35:03.036 net/enic: not in enabled drivers build config 00:35:03.036 net/failsafe: not in enabled drivers build config 00:35:03.036 net/fm10k: not in enabled drivers build config 00:35:03.036 net/gve: not in enabled drivers build config 00:35:03.036 net/hinic: not in enabled drivers build config 00:35:03.036 net/hns3: not in enabled drivers build config 00:35:03.036 net/i40e: not in enabled drivers build config 00:35:03.036 net/iavf: not in enabled drivers build config 00:35:03.036 net/ice: not in enabled drivers build config 00:35:03.036 net/idpf: not in enabled drivers build config 00:35:03.036 net/igc: not in enabled drivers build config 00:35:03.036 net/ionic: not in enabled drivers build config 00:35:03.036 net/ipn3ke: not in enabled drivers build config 00:35:03.036 net/ixgbe: not in enabled drivers build config 00:35:03.036 net/mana: not in enabled drivers build config 00:35:03.036 net/memif: not in enabled drivers build config 00:35:03.036 net/mlx4: not in enabled drivers build config 00:35:03.036 net/mlx5: not in enabled drivers build config 00:35:03.036 net/mvneta: not in enabled drivers build config 00:35:03.036 net/mvpp2: not in enabled drivers build config 00:35:03.036 net/netvsc: not in enabled drivers build config 00:35:03.036 net/nfb: not in enabled drivers build config 00:35:03.036 net/nfp: not in enabled drivers build config 00:35:03.036 net/ngbe: not in enabled drivers build config 00:35:03.036 net/null: not in enabled drivers build config 00:35:03.036 net/octeontx: not in enabled drivers 
build config 00:35:03.036 net/octeon_ep: not in enabled drivers build config 00:35:03.036 net/pcap: not in enabled drivers build config 00:35:03.036 net/pfe: not in enabled drivers build config 00:35:03.036 net/qede: not in enabled drivers build config 00:35:03.036 net/ring: not in enabled drivers build config 00:35:03.036 net/sfc: not in enabled drivers build config 00:35:03.036 net/softnic: not in enabled drivers build config 00:35:03.036 net/tap: not in enabled drivers build config 00:35:03.036 net/thunderx: not in enabled drivers build config 00:35:03.036 net/txgbe: not in enabled drivers build config 00:35:03.036 net/vdev_netvsc: not in enabled drivers build config 00:35:03.036 net/vhost: not in enabled drivers build config 00:35:03.036 net/virtio: not in enabled drivers build config 00:35:03.036 net/vmxnet3: not in enabled drivers build config 00:35:03.036 raw/*: missing internal dependency, "rawdev" 00:35:03.036 crypto/armv8: not in enabled drivers build config 00:35:03.036 crypto/bcmfs: not in enabled drivers build config 00:35:03.036 crypto/caam_jr: not in enabled drivers build config 00:35:03.036 crypto/ccp: not in enabled drivers build config 00:35:03.036 crypto/cnxk: not in enabled drivers build config 00:35:03.036 crypto/dpaa_sec: not in enabled drivers build config 00:35:03.036 crypto/dpaa2_sec: not in enabled drivers build config 00:35:03.036 crypto/ipsec_mb: not in enabled drivers build config 00:35:03.036 crypto/mlx5: not in enabled drivers build config 00:35:03.036 crypto/mvsam: not in enabled drivers build config 00:35:03.036 crypto/nitrox: not in enabled drivers build config 00:35:03.036 crypto/null: not in enabled drivers build config 00:35:03.036 crypto/octeontx: not in enabled drivers build config 00:35:03.036 crypto/openssl: not in enabled drivers build config 00:35:03.036 crypto/scheduler: not in enabled drivers build config 00:35:03.036 crypto/uadk: not in enabled drivers build config 00:35:03.036 crypto/virtio: not in enabled drivers build config 00:35:03.036 compress/isal: not in enabled drivers build config 00:35:03.036 compress/mlx5: not in enabled drivers build config 00:35:03.036 compress/nitrox: not in enabled drivers build config 00:35:03.036 compress/octeontx: not in enabled drivers build config 00:35:03.036 compress/zlib: not in enabled drivers build config 00:35:03.036 regex/*: missing internal dependency, "regexdev" 00:35:03.036 ml/*: missing internal dependency, "mldev" 00:35:03.036 vdpa/ifc: not in enabled drivers build config 00:35:03.036 vdpa/mlx5: not in enabled drivers build config 00:35:03.036 vdpa/nfp: not in enabled drivers build config 00:35:03.036 vdpa/sfc: not in enabled drivers build config 00:35:03.036 event/*: missing internal dependency, "eventdev" 00:35:03.036 baseband/*: missing internal dependency, "bbdev" 00:35:03.036 gpu/*: missing internal dependency, "gpudev" 00:35:03.036 00:35:03.036 00:35:03.036 Build targets in project: 85 00:35:03.036 00:35:03.036 DPDK 24.03.0 00:35:03.036 00:35:03.036 User defined options 00:35:03.036 buildtype : debug 00:35:03.036 default_library : shared 00:35:03.036 libdir : lib 00:35:03.036 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:35:03.036 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:35:03.036 c_link_args : 00:35:03.036 cpu_instruction_set: native 00:35:03.036 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:35:03.036 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:35:03.036 enable_docs : false 00:35:03.036 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:35:03.036 enable_kmods : false 00:35:03.036 max_lcores : 128 00:35:03.036 tests : false 00:35:03.036 00:35:03.036 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:35:03.036 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:35:03.036 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:35:03.036 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:35:03.036 [3/268] Linking static target lib/librte_kvargs.a 00:35:03.036 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:35:03.036 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:35:03.036 [6/268] Linking static target lib/librte_log.a 00:35:03.036 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:35:03.294 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:35:03.295 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:35:03.295 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:35:03.295 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:35:03.552 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:35:03.552 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:35:03.552 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:35:03.552 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:35:03.552 [16/268] Linking static target lib/librte_telemetry.a 00:35:03.552 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:35:03.552 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:35:03.809 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:35:03.809 [20/268] Linking target lib/librte_log.so.24.1 00:35:04.066 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:35:04.066 [22/268] Linking target lib/librte_kvargs.so.24.1 00:35:04.066 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:35:04.066 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:35:04.322 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:35:04.322 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:35:04.322 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:35:04.322 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:35:04.579 [29/268] Linking target lib/librte_telemetry.so.24.1 00:35:04.579 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:35:04.579 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:35:04.579 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:35:04.579 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:35:04.579 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:35:04.836 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:35:04.836 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:35:05.093 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:35:05.093 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:35:05.093 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:35:05.093 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:35:05.093 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:35:05.351 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:35:05.351 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:35:05.351 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:35:05.351 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:35:05.608 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:35:05.608 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:35:05.865 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:35:05.865 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:35:05.865 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:35:05.865 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:35:05.865 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:35:06.123 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:35:06.123 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:35:06.380 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:35:06.380 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:35:06.637 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:35:06.637 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:35:06.637 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:35:06.894 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:35:06.894 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:35:06.894 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:35:06.894 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:35:07.152 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:35:07.152 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:35:07.152 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:35:07.410 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:35:07.668 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:35:07.668 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:35:07.668 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:35:07.668 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:35:07.668 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:35:07.668 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:35:07.926 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:35:07.926 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:35:07.926 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:35:08.184 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:35:08.184 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:35:08.184 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:35:08.184 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:35:08.441 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:35:08.441 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:35:08.699 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:35:08.699 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:35:08.699 [85/268] Linking static target lib/librte_ring.a 00:35:08.699 [86/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:35:08.699 [87/268] Linking static target lib/librte_rcu.a 00:35:08.699 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:35:08.699 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:35:08.956 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:35:08.956 [91/268] Linking static target lib/librte_eal.a 00:35:08.956 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:35:08.956 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:35:09.213 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:35:09.213 [95/268] Linking static target lib/librte_mempool.a 00:35:09.213 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:35:09.213 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:35:09.213 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:35:09.213 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:35:09.213 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:35:09.778 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:35:09.778 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:35:09.778 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:35:09.778 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:35:09.778 [105/268] Linking static target lib/librte_mbuf.a 00:35:09.778 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:35:10.035 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:35:10.035 [108/268] Linking static target lib/librte_meter.a 00:35:10.035 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:35:10.293 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:35:10.293 [111/268] Linking static target lib/librte_net.a 00:35:10.293 [112/268] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:35:10.551 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:35:10.551 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:35:10.551 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:35:10.551 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:35:10.809 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:35:10.809 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:35:10.809 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:35:11.375 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:35:11.375 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:35:11.375 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:35:11.375 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:35:11.633 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:35:11.633 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:35:11.633 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:35:11.633 [127/268] Linking static target lib/librte_pci.a 00:35:11.890 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:35:11.890 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:35:11.890 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:35:11.890 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:35:11.890 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:35:11.890 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:35:11.890 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:35:11.890 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:35:12.148 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:35:12.148 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:35:12.148 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:35:12.148 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:35:12.148 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:35:12.148 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:35:12.148 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:35:12.148 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:35:12.148 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:35:12.406 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:35:12.406 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:35:12.406 [147/268] Linking static target lib/librte_cmdline.a 00:35:12.664 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:35:12.664 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:35:12.664 [150/268] Linking static target lib/librte_ethdev.a 00:35:12.921 [151/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:35:12.921 [152/268] Linking static target lib/librte_timer.a 00:35:12.921 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:35:13.179 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:35:13.179 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:35:13.179 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:35:13.179 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:35:13.179 [158/268] Linking static target lib/librte_hash.a 00:35:13.179 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:35:13.436 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:35:13.693 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:35:13.693 [162/268] Linking static target lib/librte_compressdev.a 00:35:13.694 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:35:13.694 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:35:13.952 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:35:13.952 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:35:13.952 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:35:13.952 [168/268] Linking static target lib/librte_dmadev.a 00:35:14.237 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:35:14.237 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:35:14.237 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:35:14.507 [172/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:35:14.507 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:35:14.507 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:35:14.507 [175/268] Linking static target lib/librte_cryptodev.a 00:35:14.507 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:35:14.764 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:14.764 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:35:15.022 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:15.022 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:35:15.022 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:35:15.022 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:35:15.022 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:35:15.022 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:35:15.281 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:35:15.281 [186/268] Linking static target lib/librte_power.a 00:35:15.539 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:35:15.539 [188/268] Linking static target lib/librte_reorder.a 00:35:15.797 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:35:15.797 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:35:15.797 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:35:15.797 [192/268] Linking static target lib/librte_security.a 00:35:16.055 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:35:16.055 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:35:16.055 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:35:16.620 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:35:16.620 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:35:16.620 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:35:16.876 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:35:16.876 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:35:16.876 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:35:16.876 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:17.441 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:35:17.441 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:35:17.441 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:35:17.699 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:35:17.699 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:35:17.699 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:35:17.699 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:35:17.699 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:35:17.699 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:35:17.699 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:35:17.958 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:35:17.958 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:35:17.958 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:35:17.958 [216/268] Linking static target drivers/librte_bus_pci.a 00:35:17.958 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:35:17.958 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:35:17.958 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:35:17.958 [220/268] Linking static target drivers/librte_bus_vdev.a 00:35:18.216 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:35:18.216 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:35:18.216 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:18.474 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:35:18.474 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:35:18.474 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:35:18.474 [227/268] Linking static target drivers/librte_mempool_ring.a 00:35:18.474 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:35:19.409 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:35:19.409 [230/268] Linking static target lib/librte_vhost.a 00:35:19.976 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:35:19.976 [232/268] Linking target lib/librte_eal.so.24.1 00:35:20.234 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:35:20.234 [234/268] Linking target lib/librte_pci.so.24.1 00:35:20.234 [235/268] Linking target lib/librte_timer.so.24.1 00:35:20.234 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:35:20.234 [237/268] Linking target lib/librte_meter.so.24.1 00:35:20.234 [238/268] Linking target lib/librte_dmadev.so.24.1 00:35:20.234 [239/268] Linking target lib/librte_ring.so.24.1 00:35:20.234 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:35:20.234 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:35:20.234 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:35:20.234 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:35:20.234 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:35:20.234 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:35:20.491 [246/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:20.491 [247/268] Linking target lib/librte_mempool.so.24.1 00:35:20.491 [248/268] Linking target lib/librte_rcu.so.24.1 00:35:20.491 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:35:20.491 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:35:20.491 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:35:20.491 [252/268] Linking target lib/librte_mbuf.so.24.1 00:35:20.750 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:35:20.750 [254/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:35:20.750 [255/268] Linking target lib/librte_net.so.24.1 00:35:20.750 [256/268] Linking target lib/librte_compressdev.so.24.1 00:35:20.750 [257/268] Linking target lib/librte_reorder.so.24.1 00:35:20.750 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:35:21.008 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:35:21.008 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:35:21.008 [261/268] Linking target lib/librte_hash.so.24.1 00:35:21.008 [262/268] Linking target lib/librte_cmdline.so.24.1 00:35:21.008 [263/268] Linking target lib/librte_security.so.24.1 00:35:21.008 [264/268] Linking target lib/librte_ethdev.so.24.1 00:35:21.008 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:35:21.008 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:35:21.267 [267/268] Linking target lib/librte_power.so.24.1 00:35:21.267 [268/268] Linking target lib/librte_vhost.so.24.1 00:35:21.267 INFO: autodetecting backend as ninja 00:35:21.267 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:35:43.192 CC lib/ut_mock/mock.o 00:35:43.192 CC lib/ut/ut.o 00:35:43.192 CC lib/log/log.o 00:35:43.192 CC lib/log/log_flags.o 
00:35:43.192 CC lib/log/log_deprecated.o 00:35:43.450 LIB libspdk_ut.a 00:35:43.450 LIB libspdk_ut_mock.a 00:35:43.450 LIB libspdk_log.a 00:35:43.450 SO libspdk_ut_mock.so.6.0 00:35:43.450 SO libspdk_ut.so.2.0 00:35:43.450 SO libspdk_log.so.7.1 00:35:43.709 SYMLINK libspdk_ut_mock.so 00:35:43.709 SYMLINK libspdk_ut.so 00:35:43.709 SYMLINK libspdk_log.so 00:35:43.709 CC lib/util/base64.o 00:35:43.709 CC lib/util/bit_array.o 00:35:43.709 CC lib/util/cpuset.o 00:35:43.709 CC lib/util/crc16.o 00:35:43.709 CC lib/ioat/ioat.o 00:35:43.709 CC lib/util/crc32.o 00:35:43.709 CC lib/util/crc32c.o 00:35:43.967 CXX lib/trace_parser/trace.o 00:35:43.967 CC lib/dma/dma.o 00:35:43.967 CC lib/vfio_user/host/vfio_user_pci.o 00:35:43.967 CC lib/vfio_user/host/vfio_user.o 00:35:43.967 CC lib/util/crc32_ieee.o 00:35:43.967 CC lib/util/crc64.o 00:35:43.967 CC lib/util/dif.o 00:35:44.224 CC lib/util/fd.o 00:35:44.224 LIB libspdk_dma.a 00:35:44.224 CC lib/util/fd_group.o 00:35:44.224 SO libspdk_dma.so.5.0 00:35:44.224 CC lib/util/file.o 00:35:44.224 LIB libspdk_ioat.a 00:35:44.224 CC lib/util/hexlify.o 00:35:44.224 CC lib/util/iov.o 00:35:44.224 SO libspdk_ioat.so.7.0 00:35:44.224 SYMLINK libspdk_dma.so 00:35:44.224 CC lib/util/math.o 00:35:44.224 CC lib/util/net.o 00:35:44.224 LIB libspdk_vfio_user.a 00:35:44.224 CC lib/util/pipe.o 00:35:44.482 SYMLINK libspdk_ioat.so 00:35:44.482 CC lib/util/strerror_tls.o 00:35:44.482 CC lib/util/string.o 00:35:44.482 SO libspdk_vfio_user.so.5.0 00:35:44.482 CC lib/util/uuid.o 00:35:44.482 CC lib/util/xor.o 00:35:44.482 SYMLINK libspdk_vfio_user.so 00:35:44.482 CC lib/util/md5.o 00:35:44.482 CC lib/util/zipf.o 00:35:44.740 LIB libspdk_util.a 00:35:44.740 SO libspdk_util.so.10.1 00:35:44.998 LIB libspdk_trace_parser.a 00:35:44.998 SYMLINK libspdk_util.so 00:35:44.998 SO libspdk_trace_parser.so.6.0 00:35:44.998 SYMLINK libspdk_trace_parser.so 00:35:44.998 CC lib/conf/conf.o 00:35:44.998 CC lib/json/json_parse.o 00:35:44.998 CC lib/json/json_util.o 00:35:44.998 CC lib/rdma_utils/rdma_utils.o 00:35:44.998 CC lib/json/json_write.o 00:35:44.998 CC lib/idxd/idxd.o 00:35:44.998 CC lib/idxd/idxd_user.o 00:35:44.998 CC lib/idxd/idxd_kernel.o 00:35:44.998 CC lib/env_dpdk/env.o 00:35:44.998 CC lib/vmd/vmd.o 00:35:45.256 CC lib/vmd/led.o 00:35:45.256 CC lib/env_dpdk/memory.o 00:35:45.256 CC lib/env_dpdk/pci.o 00:35:45.256 LIB libspdk_conf.a 00:35:45.256 CC lib/env_dpdk/init.o 00:35:45.256 SO libspdk_conf.so.6.0 00:35:45.513 LIB libspdk_rdma_utils.a 00:35:45.513 LIB libspdk_json.a 00:35:45.513 SO libspdk_rdma_utils.so.1.0 00:35:45.513 SO libspdk_json.so.6.0 00:35:45.513 SYMLINK libspdk_conf.so 00:35:45.513 CC lib/env_dpdk/threads.o 00:35:45.513 CC lib/env_dpdk/pci_ioat.o 00:35:45.513 SYMLINK libspdk_rdma_utils.so 00:35:45.513 CC lib/env_dpdk/pci_virtio.o 00:35:45.513 SYMLINK libspdk_json.so 00:35:45.513 CC lib/env_dpdk/pci_vmd.o 00:35:45.513 CC lib/env_dpdk/pci_idxd.o 00:35:45.513 CC lib/env_dpdk/pci_event.o 00:35:45.770 LIB libspdk_idxd.a 00:35:45.770 CC lib/env_dpdk/sigbus_handler.o 00:35:45.770 SO libspdk_idxd.so.12.1 00:35:45.770 CC lib/env_dpdk/pci_dpdk.o 00:35:45.770 CC lib/env_dpdk/pci_dpdk_2207.o 00:35:45.770 SYMLINK libspdk_idxd.so 00:35:45.770 CC lib/env_dpdk/pci_dpdk_2211.o 00:35:45.770 CC lib/rdma_provider/common.o 00:35:45.770 CC lib/rdma_provider/rdma_provider_verbs.o 00:35:45.770 CC lib/jsonrpc/jsonrpc_server.o 00:35:45.770 LIB libspdk_vmd.a 00:35:45.771 SO libspdk_vmd.so.6.0 00:35:45.771 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:35:46.029 SYMLINK libspdk_vmd.so 00:35:46.029 CC 
lib/jsonrpc/jsonrpc_client.o 00:35:46.029 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:35:46.029 LIB libspdk_rdma_provider.a 00:35:46.029 SO libspdk_rdma_provider.so.7.0 00:35:46.029 SYMLINK libspdk_rdma_provider.so 00:35:46.287 LIB libspdk_jsonrpc.a 00:35:46.287 SO libspdk_jsonrpc.so.6.0 00:35:46.287 SYMLINK libspdk_jsonrpc.so 00:35:46.545 LIB libspdk_env_dpdk.a 00:35:46.545 CC lib/rpc/rpc.o 00:35:46.545 SO libspdk_env_dpdk.so.15.1 00:35:46.804 SYMLINK libspdk_env_dpdk.so 00:35:46.804 LIB libspdk_rpc.a 00:35:46.804 SO libspdk_rpc.so.6.0 00:35:46.804 SYMLINK libspdk_rpc.so 00:35:47.062 CC lib/keyring/keyring_rpc.o 00:35:47.062 CC lib/keyring/keyring.o 00:35:47.062 CC lib/notify/notify.o 00:35:47.062 CC lib/notify/notify_rpc.o 00:35:47.062 CC lib/trace/trace.o 00:35:47.062 CC lib/trace/trace_flags.o 00:35:47.062 CC lib/trace/trace_rpc.o 00:35:47.321 LIB libspdk_notify.a 00:35:47.321 SO libspdk_notify.so.6.0 00:35:47.321 LIB libspdk_keyring.a 00:35:47.321 SYMLINK libspdk_notify.so 00:35:47.321 SO libspdk_keyring.so.2.0 00:35:47.321 LIB libspdk_trace.a 00:35:47.580 SO libspdk_trace.so.11.0 00:35:47.580 SYMLINK libspdk_keyring.so 00:35:47.580 SYMLINK libspdk_trace.so 00:35:47.840 CC lib/sock/sock.o 00:35:47.840 CC lib/sock/sock_rpc.o 00:35:47.840 CC lib/thread/thread.o 00:35:47.840 CC lib/thread/iobuf.o 00:35:48.406 LIB libspdk_sock.a 00:35:48.406 SO libspdk_sock.so.10.0 00:35:48.406 SYMLINK libspdk_sock.so 00:35:48.664 CC lib/nvme/nvme_ctrlr_cmd.o 00:35:48.664 CC lib/nvme/nvme_ctrlr.o 00:35:48.664 CC lib/nvme/nvme_fabric.o 00:35:48.664 CC lib/nvme/nvme_ns_cmd.o 00:35:48.664 CC lib/nvme/nvme_ns.o 00:35:48.664 CC lib/nvme/nvme_pcie_common.o 00:35:48.664 CC lib/nvme/nvme.o 00:35:48.664 CC lib/nvme/nvme_pcie.o 00:35:48.664 CC lib/nvme/nvme_qpair.o 00:35:49.598 CC lib/nvme/nvme_quirks.o 00:35:49.598 LIB libspdk_thread.a 00:35:49.598 SO libspdk_thread.so.11.0 00:35:49.598 CC lib/nvme/nvme_transport.o 00:35:49.598 CC lib/nvme/nvme_discovery.o 00:35:49.598 SYMLINK libspdk_thread.so 00:35:49.598 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:35:49.598 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:35:49.598 CC lib/nvme/nvme_tcp.o 00:35:49.598 CC lib/nvme/nvme_opal.o 00:35:49.856 CC lib/nvme/nvme_io_msg.o 00:35:49.856 CC lib/nvme/nvme_poll_group.o 00:35:50.113 CC lib/nvme/nvme_zns.o 00:35:50.113 CC lib/nvme/nvme_stubs.o 00:35:50.371 CC lib/nvme/nvme_auth.o 00:35:50.371 CC lib/nvme/nvme_cuse.o 00:35:50.371 CC lib/blob/blobstore.o 00:35:50.371 CC lib/accel/accel.o 00:35:50.628 CC lib/init/json_config.o 00:35:50.628 CC lib/init/subsystem.o 00:35:50.628 CC lib/init/subsystem_rpc.o 00:35:50.886 CC lib/nvme/nvme_rdma.o 00:35:50.886 CC lib/blob/request.o 00:35:50.886 CC lib/init/rpc.o 00:35:50.886 CC lib/virtio/virtio.o 00:35:51.220 CC lib/virtio/virtio_vhost_user.o 00:35:51.220 LIB libspdk_init.a 00:35:51.220 SO libspdk_init.so.6.0 00:35:51.220 CC lib/virtio/virtio_vfio_user.o 00:35:51.220 CC lib/accel/accel_rpc.o 00:35:51.220 SYMLINK libspdk_init.so 00:35:51.220 CC lib/blob/zeroes.o 00:35:51.220 CC lib/blob/blob_bs_dev.o 00:35:51.488 CC lib/virtio/virtio_pci.o 00:35:51.488 CC lib/accel/accel_sw.o 00:35:51.488 CC lib/fsdev/fsdev.o 00:35:51.488 CC lib/fsdev/fsdev_io.o 00:35:51.488 CC lib/event/app.o 00:35:51.488 CC lib/fsdev/fsdev_rpc.o 00:35:51.488 CC lib/event/reactor.o 00:35:51.488 CC lib/event/log_rpc.o 00:35:51.488 CC lib/event/app_rpc.o 00:35:51.758 LIB libspdk_virtio.a 00:35:51.758 LIB libspdk_accel.a 00:35:51.758 CC lib/event/scheduler_static.o 00:35:51.758 SO libspdk_virtio.so.7.0 00:35:51.758 SO libspdk_accel.so.16.0 
00:35:51.758 SYMLINK libspdk_virtio.so 00:35:51.758 SYMLINK libspdk_accel.so 00:35:52.016 LIB libspdk_event.a 00:35:52.016 SO libspdk_event.so.14.0 00:35:52.016 CC lib/bdev/bdev.o 00:35:52.016 CC lib/bdev/bdev_zone.o 00:35:52.016 CC lib/bdev/bdev_rpc.o 00:35:52.016 CC lib/bdev/part.o 00:35:52.016 CC lib/bdev/scsi_nvme.o 00:35:52.016 SYMLINK libspdk_event.so 00:35:52.016 LIB libspdk_fsdev.a 00:35:52.274 SO libspdk_fsdev.so.2.0 00:35:52.274 LIB libspdk_nvme.a 00:35:52.274 SYMLINK libspdk_fsdev.so 00:35:52.274 SO libspdk_nvme.so.15.0 00:35:52.532 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:35:52.532 SYMLINK libspdk_nvme.so 00:35:53.098 LIB libspdk_fuse_dispatcher.a 00:35:53.098 SO libspdk_fuse_dispatcher.so.1.0 00:35:53.098 SYMLINK libspdk_fuse_dispatcher.so 00:35:53.356 LIB libspdk_blob.a 00:35:53.614 SO libspdk_blob.so.12.0 00:35:53.614 SYMLINK libspdk_blob.so 00:35:53.872 CC lib/blobfs/blobfs.o 00:35:53.872 CC lib/blobfs/tree.o 00:35:53.872 CC lib/lvol/lvol.o 00:35:54.806 LIB libspdk_blobfs.a 00:35:54.806 LIB libspdk_bdev.a 00:35:54.806 SO libspdk_blobfs.so.11.0 00:35:54.806 SO libspdk_bdev.so.17.0 00:35:54.806 LIB libspdk_lvol.a 00:35:54.806 SYMLINK libspdk_blobfs.so 00:35:54.806 SO libspdk_lvol.so.11.0 00:35:54.806 SYMLINK libspdk_bdev.so 00:35:55.065 SYMLINK libspdk_lvol.so 00:35:55.065 CC lib/nvmf/ctrlr.o 00:35:55.065 CC lib/nvmf/ctrlr_discovery.o 00:35:55.065 CC lib/nvmf/ctrlr_bdev.o 00:35:55.065 CC lib/nvmf/subsystem.o 00:35:55.065 CC lib/nvmf/nvmf.o 00:35:55.065 CC lib/ftl/ftl_core.o 00:35:55.065 CC lib/nvmf/nvmf_rpc.o 00:35:55.065 CC lib/scsi/dev.o 00:35:55.065 CC lib/ublk/ublk.o 00:35:55.065 CC lib/nbd/nbd.o 00:35:55.323 CC lib/scsi/lun.o 00:35:55.582 CC lib/ftl/ftl_init.o 00:35:55.582 CC lib/nbd/nbd_rpc.o 00:35:55.582 CC lib/nvmf/transport.o 00:35:55.841 CC lib/scsi/port.o 00:35:55.841 CC lib/ftl/ftl_layout.o 00:35:55.841 LIB libspdk_nbd.a 00:35:55.841 CC lib/ublk/ublk_rpc.o 00:35:55.841 SO libspdk_nbd.so.7.0 00:35:55.841 SYMLINK libspdk_nbd.so 00:35:55.841 CC lib/ftl/ftl_debug.o 00:35:55.841 CC lib/nvmf/tcp.o 00:35:55.841 CC lib/scsi/scsi.o 00:35:55.841 LIB libspdk_ublk.a 00:35:56.100 SO libspdk_ublk.so.3.0 00:35:56.100 SYMLINK libspdk_ublk.so 00:35:56.100 CC lib/nvmf/stubs.o 00:35:56.100 CC lib/scsi/scsi_bdev.o 00:35:56.100 CC lib/ftl/ftl_io.o 00:35:56.100 CC lib/ftl/ftl_sb.o 00:35:56.100 CC lib/scsi/scsi_pr.o 00:35:56.100 CC lib/scsi/scsi_rpc.o 00:35:56.359 CC lib/nvmf/mdns_server.o 00:35:56.359 CC lib/ftl/ftl_l2p.o 00:35:56.359 CC lib/scsi/task.o 00:35:56.359 CC lib/ftl/ftl_l2p_flat.o 00:35:56.359 CC lib/ftl/ftl_nv_cache.o 00:35:56.359 CC lib/nvmf/rdma.o 00:35:56.617 CC lib/nvmf/auth.o 00:35:56.617 CC lib/ftl/ftl_band.o 00:35:56.617 CC lib/ftl/ftl_band_ops.o 00:35:56.617 LIB libspdk_scsi.a 00:35:56.617 CC lib/ftl/ftl_writer.o 00:35:56.617 SO libspdk_scsi.so.9.0 00:35:56.617 CC lib/ftl/ftl_rq.o 00:35:56.876 SYMLINK libspdk_scsi.so 00:35:56.876 CC lib/ftl/ftl_reloc.o 00:35:56.876 CC lib/ftl/ftl_l2p_cache.o 00:35:56.876 CC lib/ftl/ftl_p2l.o 00:35:56.876 CC lib/ftl/ftl_p2l_log.o 00:35:56.876 CC lib/ftl/mngt/ftl_mngt.o 00:35:57.135 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:35:57.135 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:35:57.393 CC lib/ftl/mngt/ftl_mngt_startup.o 00:35:57.393 CC lib/ftl/mngt/ftl_mngt_md.o 00:35:57.393 CC lib/ftl/mngt/ftl_mngt_misc.o 00:35:57.393 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:35:57.393 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:35:57.393 CC lib/iscsi/conn.o 00:35:57.393 CC lib/iscsi/init_grp.o 00:35:57.393 CC lib/vhost/vhost.o 00:35:57.393 CC lib/ftl/mngt/ftl_mngt_band.o 
00:35:57.393 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:35:57.652 CC lib/iscsi/iscsi.o 00:35:57.652 CC lib/iscsi/param.o 00:35:57.652 CC lib/iscsi/portal_grp.o 00:35:57.652 CC lib/iscsi/tgt_node.o 00:35:57.652 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:35:57.652 CC lib/iscsi/iscsi_subsystem.o 00:35:57.910 CC lib/iscsi/iscsi_rpc.o 00:35:57.910 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:35:57.910 CC lib/iscsi/task.o 00:35:57.910 CC lib/vhost/vhost_rpc.o 00:35:58.168 CC lib/vhost/vhost_scsi.o 00:35:58.168 CC lib/vhost/vhost_blk.o 00:35:58.168 CC lib/vhost/rte_vhost_user.o 00:35:58.168 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:35:58.168 CC lib/ftl/utils/ftl_conf.o 00:35:58.426 CC lib/ftl/utils/ftl_md.o 00:35:58.426 CC lib/ftl/utils/ftl_mempool.o 00:35:58.426 CC lib/ftl/utils/ftl_bitmap.o 00:35:58.426 CC lib/ftl/utils/ftl_property.o 00:35:58.426 LIB libspdk_nvmf.a 00:35:58.684 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:35:58.684 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:35:58.684 SO libspdk_nvmf.so.20.0 00:35:58.684 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:35:58.684 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:35:58.942 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:35:58.942 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:35:58.942 SYMLINK libspdk_nvmf.so 00:35:58.942 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:35:58.942 CC lib/ftl/upgrade/ftl_sb_v3.o 00:35:58.942 CC lib/ftl/upgrade/ftl_sb_v5.o 00:35:58.942 CC lib/ftl/nvc/ftl_nvc_dev.o 00:35:58.942 LIB libspdk_iscsi.a 00:35:58.942 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:35:59.200 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:35:59.200 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:35:59.200 CC lib/ftl/base/ftl_base_dev.o 00:35:59.200 SO libspdk_iscsi.so.8.0 00:35:59.200 CC lib/ftl/base/ftl_base_bdev.o 00:35:59.200 CC lib/ftl/ftl_trace.o 00:35:59.200 SYMLINK libspdk_iscsi.so 00:35:59.200 LIB libspdk_vhost.a 00:35:59.459 SO libspdk_vhost.so.8.0 00:35:59.459 LIB libspdk_ftl.a 00:35:59.459 SYMLINK libspdk_vhost.so 00:35:59.717 SO libspdk_ftl.so.9.0 00:35:59.975 SYMLINK libspdk_ftl.so 00:36:00.234 CC module/env_dpdk/env_dpdk_rpc.o 00:36:00.234 CC module/accel/dsa/accel_dsa.o 00:36:00.234 CC module/keyring/linux/keyring.o 00:36:00.234 CC module/keyring/file/keyring.o 00:36:00.234 CC module/fsdev/aio/fsdev_aio.o 00:36:00.234 CC module/sock/posix/posix.o 00:36:00.234 CC module/blob/bdev/blob_bdev.o 00:36:00.234 CC module/accel/ioat/accel_ioat.o 00:36:00.234 CC module/scheduler/dynamic/scheduler_dynamic.o 00:36:00.234 CC module/accel/error/accel_error.o 00:36:00.234 LIB libspdk_env_dpdk_rpc.a 00:36:00.493 SO libspdk_env_dpdk_rpc.so.6.0 00:36:00.493 SYMLINK libspdk_env_dpdk_rpc.so 00:36:00.493 CC module/accel/ioat/accel_ioat_rpc.o 00:36:00.493 CC module/keyring/file/keyring_rpc.o 00:36:00.493 CC module/keyring/linux/keyring_rpc.o 00:36:00.493 CC module/accel/dsa/accel_dsa_rpc.o 00:36:00.493 LIB libspdk_scheduler_dynamic.a 00:36:00.493 CC module/accel/error/accel_error_rpc.o 00:36:00.493 SO libspdk_scheduler_dynamic.so.4.0 00:36:00.493 LIB libspdk_blob_bdev.a 00:36:00.493 LIB libspdk_accel_ioat.a 00:36:00.493 LIB libspdk_keyring_linux.a 00:36:00.493 SO libspdk_blob_bdev.so.12.0 00:36:00.493 LIB libspdk_keyring_file.a 00:36:00.493 SYMLINK libspdk_scheduler_dynamic.so 00:36:00.493 SO libspdk_accel_ioat.so.6.0 00:36:00.751 SO libspdk_keyring_linux.so.1.0 00:36:00.751 SO libspdk_keyring_file.so.2.0 00:36:00.751 LIB libspdk_accel_dsa.a 00:36:00.751 LIB libspdk_accel_error.a 00:36:00.751 SO libspdk_accel_dsa.so.5.0 00:36:00.751 SYMLINK libspdk_blob_bdev.so 00:36:00.751 SYMLINK libspdk_accel_ioat.so 00:36:00.751 
SYMLINK libspdk_keyring_linux.so 00:36:00.751 SYMLINK libspdk_keyring_file.so 00:36:00.751 CC module/fsdev/aio/fsdev_aio_rpc.o 00:36:00.751 CC module/fsdev/aio/linux_aio_mgr.o 00:36:00.751 SO libspdk_accel_error.so.2.0 00:36:00.751 SYMLINK libspdk_accel_dsa.so 00:36:00.751 SYMLINK libspdk_accel_error.so 00:36:00.751 CC module/accel/iaa/accel_iaa.o 00:36:00.751 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:36:00.751 CC module/accel/iaa/accel_iaa_rpc.o 00:36:01.009 CC module/sock/uring/uring.o 00:36:01.009 CC module/scheduler/gscheduler/gscheduler.o 00:36:01.009 LIB libspdk_scheduler_dpdk_governor.a 00:36:01.009 LIB libspdk_fsdev_aio.a 00:36:01.009 SO libspdk_scheduler_dpdk_governor.so.4.0 00:36:01.009 SO libspdk_fsdev_aio.so.1.0 00:36:01.009 LIB libspdk_accel_iaa.a 00:36:01.009 LIB libspdk_sock_posix.a 00:36:01.009 SYMLINK libspdk_scheduler_dpdk_governor.so 00:36:01.009 SO libspdk_accel_iaa.so.3.0 00:36:01.009 CC module/bdev/delay/vbdev_delay.o 00:36:01.009 SO libspdk_sock_posix.so.6.0 00:36:01.009 LIB libspdk_scheduler_gscheduler.a 00:36:01.009 CC module/blobfs/bdev/blobfs_bdev.o 00:36:01.009 SYMLINK libspdk_fsdev_aio.so 00:36:01.009 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:36:01.009 CC module/bdev/error/vbdev_error.o 00:36:01.009 SYMLINK libspdk_accel_iaa.so 00:36:01.009 SO libspdk_scheduler_gscheduler.so.4.0 00:36:01.009 CC module/bdev/delay/vbdev_delay_rpc.o 00:36:01.268 SYMLINK libspdk_scheduler_gscheduler.so 00:36:01.268 SYMLINK libspdk_sock_posix.so 00:36:01.268 CC module/bdev/error/vbdev_error_rpc.o 00:36:01.268 CC module/bdev/gpt/gpt.o 00:36:01.268 CC module/bdev/lvol/vbdev_lvol.o 00:36:01.268 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:36:01.268 LIB libspdk_blobfs_bdev.a 00:36:01.268 SO libspdk_blobfs_bdev.so.6.0 00:36:01.268 CC module/bdev/malloc/bdev_malloc.o 00:36:01.268 CC module/bdev/malloc/bdev_malloc_rpc.o 00:36:01.268 LIB libspdk_bdev_error.a 00:36:01.526 SYMLINK libspdk_blobfs_bdev.so 00:36:01.526 CC module/bdev/gpt/vbdev_gpt.o 00:36:01.526 SO libspdk_bdev_error.so.6.0 00:36:01.526 LIB libspdk_bdev_delay.a 00:36:01.526 SYMLINK libspdk_bdev_error.so 00:36:01.526 CC module/bdev/null/bdev_null.o 00:36:01.526 SO libspdk_bdev_delay.so.6.0 00:36:01.526 CC module/bdev/null/bdev_null_rpc.o 00:36:01.526 CC module/bdev/nvme/bdev_nvme.o 00:36:01.526 SYMLINK libspdk_bdev_delay.so 00:36:01.526 LIB libspdk_sock_uring.a 00:36:01.526 SO libspdk_sock_uring.so.5.0 00:36:01.784 CC module/bdev/passthru/vbdev_passthru.o 00:36:01.784 LIB libspdk_bdev_gpt.a 00:36:01.784 SYMLINK libspdk_sock_uring.so 00:36:01.784 LIB libspdk_bdev_malloc.a 00:36:01.784 SO libspdk_bdev_gpt.so.6.0 00:36:01.784 CC module/bdev/raid/bdev_raid.o 00:36:01.784 SO libspdk_bdev_malloc.so.6.0 00:36:01.784 CC module/bdev/raid/bdev_raid_rpc.o 00:36:01.784 LIB libspdk_bdev_lvol.a 00:36:01.784 LIB libspdk_bdev_null.a 00:36:01.784 SO libspdk_bdev_lvol.so.6.0 00:36:01.784 SYMLINK libspdk_bdev_gpt.so 00:36:01.784 SO libspdk_bdev_null.so.6.0 00:36:01.784 SYMLINK libspdk_bdev_malloc.so 00:36:01.784 CC module/bdev/nvme/bdev_nvme_rpc.o 00:36:01.784 CC module/bdev/split/vbdev_split.o 00:36:01.784 SYMLINK libspdk_bdev_lvol.so 00:36:01.784 CC module/bdev/raid/bdev_raid_sb.o 00:36:01.784 CC module/bdev/zone_block/vbdev_zone_block.o 00:36:01.784 SYMLINK libspdk_bdev_null.so 00:36:01.784 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:36:02.042 CC module/bdev/nvme/nvme_rpc.o 00:36:02.042 CC module/bdev/nvme/bdev_mdns_client.o 00:36:02.042 CC module/bdev/uring/bdev_uring.o 00:36:02.042 CC module/bdev/split/vbdev_split_rpc.o 00:36:02.042 
LIB libspdk_bdev_passthru.a 00:36:02.042 SO libspdk_bdev_passthru.so.6.0 00:36:02.300 SYMLINK libspdk_bdev_passthru.so 00:36:02.300 CC module/bdev/nvme/vbdev_opal.o 00:36:02.300 CC module/bdev/raid/raid0.o 00:36:02.300 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:36:02.300 LIB libspdk_bdev_split.a 00:36:02.300 SO libspdk_bdev_split.so.6.0 00:36:02.300 CC module/bdev/aio/bdev_aio.o 00:36:02.300 CC module/bdev/ftl/bdev_ftl.o 00:36:02.300 CC module/bdev/uring/bdev_uring_rpc.o 00:36:02.300 SYMLINK libspdk_bdev_split.so 00:36:02.300 CC module/bdev/aio/bdev_aio_rpc.o 00:36:02.300 LIB libspdk_bdev_zone_block.a 00:36:02.559 SO libspdk_bdev_zone_block.so.6.0 00:36:02.559 CC module/bdev/nvme/vbdev_opal_rpc.o 00:36:02.559 CC module/bdev/ftl/bdev_ftl_rpc.o 00:36:02.559 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:36:02.559 SYMLINK libspdk_bdev_zone_block.so 00:36:02.559 LIB libspdk_bdev_uring.a 00:36:02.559 SO libspdk_bdev_uring.so.6.0 00:36:02.559 SYMLINK libspdk_bdev_uring.so 00:36:02.559 CC module/bdev/raid/raid1.o 00:36:02.559 CC module/bdev/raid/concat.o 00:36:02.559 LIB libspdk_bdev_aio.a 00:36:02.817 CC module/bdev/iscsi/bdev_iscsi.o 00:36:02.817 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:36:02.817 SO libspdk_bdev_aio.so.6.0 00:36:02.817 CC module/bdev/virtio/bdev_virtio_scsi.o 00:36:02.817 LIB libspdk_bdev_ftl.a 00:36:02.817 CC module/bdev/virtio/bdev_virtio_blk.o 00:36:02.817 SO libspdk_bdev_ftl.so.6.0 00:36:02.817 SYMLINK libspdk_bdev_aio.so 00:36:02.817 CC module/bdev/virtio/bdev_virtio_rpc.o 00:36:02.817 SYMLINK libspdk_bdev_ftl.so 00:36:02.817 LIB libspdk_bdev_raid.a 00:36:03.075 SO libspdk_bdev_raid.so.6.0 00:36:03.075 LIB libspdk_bdev_iscsi.a 00:36:03.075 SYMLINK libspdk_bdev_raid.so 00:36:03.075 SO libspdk_bdev_iscsi.so.6.0 00:36:03.075 SYMLINK libspdk_bdev_iscsi.so 00:36:03.333 LIB libspdk_bdev_virtio.a 00:36:03.333 SO libspdk_bdev_virtio.so.6.0 00:36:03.333 SYMLINK libspdk_bdev_virtio.so 00:36:04.267 LIB libspdk_bdev_nvme.a 00:36:04.267 SO libspdk_bdev_nvme.so.7.1 00:36:04.267 SYMLINK libspdk_bdev_nvme.so 00:36:04.835 CC module/event/subsystems/vmd/vmd.o 00:36:04.835 CC module/event/subsystems/vmd/vmd_rpc.o 00:36:04.835 CC module/event/subsystems/keyring/keyring.o 00:36:04.835 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:36:04.835 CC module/event/subsystems/sock/sock.o 00:36:04.835 CC module/event/subsystems/scheduler/scheduler.o 00:36:04.835 CC module/event/subsystems/fsdev/fsdev.o 00:36:04.835 CC module/event/subsystems/iobuf/iobuf.o 00:36:04.835 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:36:04.835 LIB libspdk_event_keyring.a 00:36:04.835 LIB libspdk_event_vhost_blk.a 00:36:04.835 LIB libspdk_event_vmd.a 00:36:04.835 SO libspdk_event_keyring.so.1.0 00:36:04.835 LIB libspdk_event_fsdev.a 00:36:04.835 LIB libspdk_event_scheduler.a 00:36:04.835 LIB libspdk_event_sock.a 00:36:05.094 SO libspdk_event_vhost_blk.so.3.0 00:36:05.094 LIB libspdk_event_iobuf.a 00:36:05.094 SO libspdk_event_scheduler.so.4.0 00:36:05.094 SO libspdk_event_fsdev.so.1.0 00:36:05.094 SO libspdk_event_sock.so.5.0 00:36:05.094 SO libspdk_event_vmd.so.6.0 00:36:05.094 SYMLINK libspdk_event_keyring.so 00:36:05.094 SO libspdk_event_iobuf.so.3.0 00:36:05.094 SYMLINK libspdk_event_fsdev.so 00:36:05.094 SYMLINK libspdk_event_vhost_blk.so 00:36:05.094 SYMLINK libspdk_event_scheduler.so 00:36:05.094 SYMLINK libspdk_event_sock.so 00:36:05.094 SYMLINK libspdk_event_vmd.so 00:36:05.094 SYMLINK libspdk_event_iobuf.so 00:36:05.352 CC module/event/subsystems/accel/accel.o 00:36:05.352 LIB libspdk_event_accel.a 
00:36:05.610 SO libspdk_event_accel.so.6.0 00:36:05.610 SYMLINK libspdk_event_accel.so 00:36:05.868 CC module/event/subsystems/bdev/bdev.o 00:36:06.127 LIB libspdk_event_bdev.a 00:36:06.127 SO libspdk_event_bdev.so.6.0 00:36:06.127 SYMLINK libspdk_event_bdev.so 00:36:06.386 CC module/event/subsystems/ublk/ublk.o 00:36:06.386 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:36:06.386 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:36:06.386 CC module/event/subsystems/nbd/nbd.o 00:36:06.386 CC module/event/subsystems/scsi/scsi.o 00:36:06.386 LIB libspdk_event_ublk.a 00:36:06.386 LIB libspdk_event_nbd.a 00:36:06.386 SO libspdk_event_ublk.so.3.0 00:36:06.643 LIB libspdk_event_scsi.a 00:36:06.643 SO libspdk_event_nbd.so.6.0 00:36:06.643 SO libspdk_event_scsi.so.6.0 00:36:06.643 SYMLINK libspdk_event_ublk.so 00:36:06.643 SYMLINK libspdk_event_nbd.so 00:36:06.643 LIB libspdk_event_nvmf.a 00:36:06.643 SYMLINK libspdk_event_scsi.so 00:36:06.643 SO libspdk_event_nvmf.so.6.0 00:36:06.643 SYMLINK libspdk_event_nvmf.so 00:36:06.900 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:36:06.900 CC module/event/subsystems/iscsi/iscsi.o 00:36:07.157 LIB libspdk_event_vhost_scsi.a 00:36:07.157 LIB libspdk_event_iscsi.a 00:36:07.157 SO libspdk_event_vhost_scsi.so.3.0 00:36:07.157 SO libspdk_event_iscsi.so.6.0 00:36:07.157 SYMLINK libspdk_event_vhost_scsi.so 00:36:07.157 SYMLINK libspdk_event_iscsi.so 00:36:07.450 SO libspdk.so.6.0 00:36:07.450 SYMLINK libspdk.so 00:36:07.712 CXX app/trace/trace.o 00:36:07.712 CC app/trace_record/trace_record.o 00:36:07.712 CC app/spdk_lspci/spdk_lspci.o 00:36:07.712 CC app/nvmf_tgt/nvmf_main.o 00:36:07.712 CC examples/interrupt_tgt/interrupt_tgt.o 00:36:07.712 CC app/iscsi_tgt/iscsi_tgt.o 00:36:07.712 CC app/spdk_tgt/spdk_tgt.o 00:36:07.712 CC examples/util/zipf/zipf.o 00:36:07.712 CC test/thread/poller_perf/poller_perf.o 00:36:07.712 CC examples/ioat/perf/perf.o 00:36:07.712 LINK spdk_lspci 00:36:07.973 LINK nvmf_tgt 00:36:07.973 LINK poller_perf 00:36:07.973 LINK interrupt_tgt 00:36:07.973 LINK zipf 00:36:07.973 LINK spdk_trace_record 00:36:07.973 LINK iscsi_tgt 00:36:07.973 LINK spdk_tgt 00:36:07.973 LINK ioat_perf 00:36:07.973 CC app/spdk_nvme_perf/perf.o 00:36:07.973 LINK spdk_trace 00:36:08.232 CC app/spdk_nvme_identify/identify.o 00:36:08.232 CC app/spdk_nvme_discover/discovery_aer.o 00:36:08.232 CC examples/ioat/verify/verify.o 00:36:08.232 CC app/spdk_top/spdk_top.o 00:36:08.232 CC test/dma/test_dma/test_dma.o 00:36:08.232 CC examples/sock/hello_world/hello_sock.o 00:36:08.232 CC examples/thread/thread/thread_ex.o 00:36:08.490 CC app/spdk_dd/spdk_dd.o 00:36:08.490 LINK spdk_nvme_discover 00:36:08.490 CC test/app/bdev_svc/bdev_svc.o 00:36:08.490 LINK verify 00:36:08.490 LINK hello_sock 00:36:08.749 LINK bdev_svc 00:36:08.749 LINK thread 00:36:08.749 TEST_HEADER include/spdk/accel.h 00:36:08.749 TEST_HEADER include/spdk/accel_module.h 00:36:08.749 TEST_HEADER include/spdk/assert.h 00:36:08.749 TEST_HEADER include/spdk/barrier.h 00:36:08.749 TEST_HEADER include/spdk/base64.h 00:36:08.749 TEST_HEADER include/spdk/bdev.h 00:36:08.749 TEST_HEADER include/spdk/bdev_module.h 00:36:08.749 TEST_HEADER include/spdk/bdev_zone.h 00:36:08.749 TEST_HEADER include/spdk/bit_array.h 00:36:08.749 TEST_HEADER include/spdk/bit_pool.h 00:36:08.749 CC app/fio/nvme/fio_plugin.o 00:36:08.749 TEST_HEADER include/spdk/blob_bdev.h 00:36:08.749 TEST_HEADER include/spdk/blobfs_bdev.h 00:36:08.749 TEST_HEADER include/spdk/blobfs.h 00:36:08.749 TEST_HEADER include/spdk/blob.h 00:36:08.749 TEST_HEADER 
include/spdk/conf.h 00:36:08.749 TEST_HEADER include/spdk/config.h 00:36:08.749 TEST_HEADER include/spdk/cpuset.h 00:36:08.749 TEST_HEADER include/spdk/crc16.h 00:36:08.749 TEST_HEADER include/spdk/crc32.h 00:36:08.749 TEST_HEADER include/spdk/crc64.h 00:36:08.749 TEST_HEADER include/spdk/dif.h 00:36:08.749 TEST_HEADER include/spdk/dma.h 00:36:08.749 TEST_HEADER include/spdk/endian.h 00:36:08.749 TEST_HEADER include/spdk/env_dpdk.h 00:36:08.749 TEST_HEADER include/spdk/env.h 00:36:08.749 TEST_HEADER include/spdk/event.h 00:36:08.749 TEST_HEADER include/spdk/fd_group.h 00:36:08.749 TEST_HEADER include/spdk/fd.h 00:36:08.749 TEST_HEADER include/spdk/file.h 00:36:08.749 TEST_HEADER include/spdk/fsdev.h 00:36:08.749 TEST_HEADER include/spdk/fsdev_module.h 00:36:08.749 TEST_HEADER include/spdk/ftl.h 00:36:08.749 TEST_HEADER include/spdk/fuse_dispatcher.h 00:36:08.749 TEST_HEADER include/spdk/gpt_spec.h 00:36:08.749 TEST_HEADER include/spdk/hexlify.h 00:36:08.749 TEST_HEADER include/spdk/histogram_data.h 00:36:08.749 TEST_HEADER include/spdk/idxd.h 00:36:08.749 TEST_HEADER include/spdk/idxd_spec.h 00:36:08.749 TEST_HEADER include/spdk/init.h 00:36:08.749 TEST_HEADER include/spdk/ioat.h 00:36:08.749 TEST_HEADER include/spdk/ioat_spec.h 00:36:08.749 TEST_HEADER include/spdk/iscsi_spec.h 00:36:08.749 TEST_HEADER include/spdk/json.h 00:36:08.749 TEST_HEADER include/spdk/jsonrpc.h 00:36:08.749 TEST_HEADER include/spdk/keyring.h 00:36:08.749 TEST_HEADER include/spdk/keyring_module.h 00:36:08.749 TEST_HEADER include/spdk/likely.h 00:36:08.749 TEST_HEADER include/spdk/log.h 00:36:08.749 TEST_HEADER include/spdk/lvol.h 00:36:08.749 TEST_HEADER include/spdk/md5.h 00:36:08.749 TEST_HEADER include/spdk/memory.h 00:36:08.749 TEST_HEADER include/spdk/mmio.h 00:36:08.749 TEST_HEADER include/spdk/nbd.h 00:36:08.749 TEST_HEADER include/spdk/net.h 00:36:08.749 TEST_HEADER include/spdk/notify.h 00:36:08.749 TEST_HEADER include/spdk/nvme.h 00:36:08.749 TEST_HEADER include/spdk/nvme_intel.h 00:36:08.749 TEST_HEADER include/spdk/nvme_ocssd.h 00:36:08.749 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:36:08.749 TEST_HEADER include/spdk/nvme_spec.h 00:36:08.749 TEST_HEADER include/spdk/nvme_zns.h 00:36:08.749 TEST_HEADER include/spdk/nvmf_cmd.h 00:36:08.749 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:36:08.749 TEST_HEADER include/spdk/nvmf.h 00:36:08.749 TEST_HEADER include/spdk/nvmf_spec.h 00:36:08.749 TEST_HEADER include/spdk/nvmf_transport.h 00:36:08.749 TEST_HEADER include/spdk/opal.h 00:36:09.008 TEST_HEADER include/spdk/opal_spec.h 00:36:09.008 TEST_HEADER include/spdk/pci_ids.h 00:36:09.008 TEST_HEADER include/spdk/pipe.h 00:36:09.008 TEST_HEADER include/spdk/queue.h 00:36:09.008 TEST_HEADER include/spdk/reduce.h 00:36:09.008 LINK test_dma 00:36:09.008 LINK spdk_dd 00:36:09.008 TEST_HEADER include/spdk/rpc.h 00:36:09.008 TEST_HEADER include/spdk/scheduler.h 00:36:09.008 TEST_HEADER include/spdk/scsi.h 00:36:09.008 TEST_HEADER include/spdk/scsi_spec.h 00:36:09.008 TEST_HEADER include/spdk/sock.h 00:36:09.008 TEST_HEADER include/spdk/stdinc.h 00:36:09.008 TEST_HEADER include/spdk/string.h 00:36:09.008 TEST_HEADER include/spdk/thread.h 00:36:09.008 TEST_HEADER include/spdk/trace.h 00:36:09.008 TEST_HEADER include/spdk/trace_parser.h 00:36:09.008 TEST_HEADER include/spdk/tree.h 00:36:09.008 TEST_HEADER include/spdk/ublk.h 00:36:09.008 TEST_HEADER include/spdk/util.h 00:36:09.008 TEST_HEADER include/spdk/uuid.h 00:36:09.008 TEST_HEADER include/spdk/version.h 00:36:09.008 TEST_HEADER include/spdk/vfio_user_pci.h 
00:36:09.008 TEST_HEADER include/spdk/vfio_user_spec.h 00:36:09.008 TEST_HEADER include/spdk/vhost.h 00:36:09.008 TEST_HEADER include/spdk/vmd.h 00:36:09.008 TEST_HEADER include/spdk/xor.h 00:36:09.008 TEST_HEADER include/spdk/zipf.h 00:36:09.008 CXX test/cpp_headers/accel.o 00:36:09.008 CC test/app/histogram_perf/histogram_perf.o 00:36:09.008 LINK spdk_nvme_perf 00:36:09.008 LINK spdk_nvme_identify 00:36:09.008 CC examples/vmd/lsvmd/lsvmd.o 00:36:09.008 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:36:09.008 CXX test/cpp_headers/accel_module.o 00:36:09.008 LINK histogram_perf 00:36:09.267 CC test/app/jsoncat/jsoncat.o 00:36:09.267 LINK lsvmd 00:36:09.267 LINK spdk_top 00:36:09.267 CC test/app/stub/stub.o 00:36:09.267 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:36:09.267 LINK spdk_nvme 00:36:09.267 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:36:09.267 CXX test/cpp_headers/assert.o 00:36:09.267 LINK jsoncat 00:36:09.267 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:36:09.267 CC examples/vmd/led/led.o 00:36:09.526 LINK stub 00:36:09.526 LINK nvme_fuzz 00:36:09.526 CXX test/cpp_headers/barrier.o 00:36:09.526 LINK led 00:36:09.526 CC app/fio/bdev/fio_plugin.o 00:36:09.526 CC examples/idxd/perf/perf.o 00:36:09.799 CXX test/cpp_headers/base64.o 00:36:09.799 CC examples/fsdev/hello_world/hello_fsdev.o 00:36:09.799 CC examples/accel/perf/accel_perf.o 00:36:09.799 LINK vhost_fuzz 00:36:09.799 CC test/env/vtophys/vtophys.o 00:36:09.799 CC test/event/event_perf/event_perf.o 00:36:09.799 CC test/env/mem_callbacks/mem_callbacks.o 00:36:09.799 CXX test/cpp_headers/bdev.o 00:36:09.799 LINK idxd_perf 00:36:10.058 CXX test/cpp_headers/bdev_module.o 00:36:10.058 LINK vtophys 00:36:10.058 LINK hello_fsdev 00:36:10.058 LINK event_perf 00:36:10.058 CXX test/cpp_headers/bdev_zone.o 00:36:10.058 LINK spdk_bdev 00:36:10.316 CC test/rpc_client/rpc_client_test.o 00:36:10.316 LINK accel_perf 00:36:10.316 CC test/event/reactor/reactor.o 00:36:10.316 CXX test/cpp_headers/bit_array.o 00:36:10.316 CC test/event/reactor_perf/reactor_perf.o 00:36:10.316 CC test/nvme/aer/aer.o 00:36:10.316 CC test/event/app_repeat/app_repeat.o 00:36:10.316 CC app/vhost/vhost.o 00:36:10.316 LINK reactor 00:36:10.316 LINK rpc_client_test 00:36:10.316 CXX test/cpp_headers/bit_pool.o 00:36:10.316 LINK reactor_perf 00:36:10.574 LINK app_repeat 00:36:10.574 LINK mem_callbacks 00:36:10.574 LINK vhost 00:36:10.574 LINK aer 00:36:10.574 CXX test/cpp_headers/blob_bdev.o 00:36:10.574 CC examples/blob/hello_world/hello_blob.o 00:36:10.574 CC examples/blob/cli/blobcli.o 00:36:10.574 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:36:10.832 CC examples/nvme/hello_world/hello_world.o 00:36:10.832 CC test/event/scheduler/scheduler.o 00:36:10.832 CC examples/bdev/hello_world/hello_bdev.o 00:36:10.832 CXX test/cpp_headers/blobfs_bdev.o 00:36:10.832 CC test/nvme/reset/reset.o 00:36:10.832 CC test/nvme/sgl/sgl.o 00:36:10.832 LINK hello_blob 00:36:10.832 LINK env_dpdk_post_init 00:36:11.091 LINK hello_world 00:36:11.091 CXX test/cpp_headers/blobfs.o 00:36:11.091 LINK scheduler 00:36:11.091 LINK iscsi_fuzz 00:36:11.091 LINK hello_bdev 00:36:11.091 CXX test/cpp_headers/blob.o 00:36:11.091 LINK reset 00:36:11.091 LINK sgl 00:36:11.091 CC test/env/memory/memory_ut.o 00:36:11.091 LINK blobcli 00:36:11.349 CXX test/cpp_headers/conf.o 00:36:11.349 CC examples/nvme/reconnect/reconnect.o 00:36:11.349 CC examples/nvme/nvme_manage/nvme_manage.o 00:36:11.349 CC examples/nvme/arbitration/arbitration.o 00:36:11.349 CC examples/nvme/hotplug/hotplug.o 00:36:11.349 CC 
test/env/pci/pci_ut.o 00:36:11.349 CC test/nvme/e2edp/nvme_dp.o 00:36:11.349 CC examples/bdev/bdevperf/bdevperf.o 00:36:11.349 CXX test/cpp_headers/config.o 00:36:11.349 CXX test/cpp_headers/cpuset.o 00:36:11.606 CC test/accel/dif/dif.o 00:36:11.606 LINK hotplug 00:36:11.606 LINK reconnect 00:36:11.606 CXX test/cpp_headers/crc16.o 00:36:11.606 LINK nvme_dp 00:36:11.606 LINK arbitration 00:36:11.864 LINK pci_ut 00:36:11.864 LINK nvme_manage 00:36:11.864 CXX test/cpp_headers/crc32.o 00:36:11.864 CC examples/nvme/cmb_copy/cmb_copy.o 00:36:11.864 CC test/nvme/overhead/overhead.o 00:36:11.864 CC examples/nvme/abort/abort.o 00:36:11.864 CXX test/cpp_headers/crc64.o 00:36:12.122 CC test/blobfs/mkfs/mkfs.o 00:36:12.122 LINK cmb_copy 00:36:12.122 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:36:12.122 CXX test/cpp_headers/dif.o 00:36:12.122 LINK overhead 00:36:12.122 CC test/lvol/esnap/esnap.o 00:36:12.122 LINK mkfs 00:36:12.122 LINK bdevperf 00:36:12.380 LINK pmr_persistence 00:36:12.380 LINK dif 00:36:12.380 LINK abort 00:36:12.380 CXX test/cpp_headers/dma.o 00:36:12.380 CC test/nvme/err_injection/err_injection.o 00:36:12.380 LINK memory_ut 00:36:12.380 CXX test/cpp_headers/endian.o 00:36:12.380 CC test/nvme/startup/startup.o 00:36:12.380 CXX test/cpp_headers/env_dpdk.o 00:36:12.637 LINK err_injection 00:36:12.637 CC test/nvme/simple_copy/simple_copy.o 00:36:12.637 CC test/nvme/reserve/reserve.o 00:36:12.637 CXX test/cpp_headers/env.o 00:36:12.637 CXX test/cpp_headers/event.o 00:36:12.637 LINK startup 00:36:12.637 CC test/nvme/connect_stress/connect_stress.o 00:36:12.637 CC test/bdev/bdevio/bdevio.o 00:36:12.637 CC examples/nvmf/nvmf/nvmf.o 00:36:12.895 LINK reserve 00:36:12.895 LINK simple_copy 00:36:12.895 CC test/nvme/boot_partition/boot_partition.o 00:36:12.895 CXX test/cpp_headers/fd_group.o 00:36:12.895 LINK connect_stress 00:36:12.895 CC test/nvme/compliance/nvme_compliance.o 00:36:12.895 CC test/nvme/fused_ordering/fused_ordering.o 00:36:12.895 LINK boot_partition 00:36:13.152 CC test/nvme/doorbell_aers/doorbell_aers.o 00:36:13.152 CXX test/cpp_headers/fd.o 00:36:13.152 CC test/nvme/fdp/fdp.o 00:36:13.152 CXX test/cpp_headers/file.o 00:36:13.152 LINK nvmf 00:36:13.152 LINK bdevio 00:36:13.152 LINK fused_ordering 00:36:13.152 CXX test/cpp_headers/fsdev.o 00:36:13.152 CXX test/cpp_headers/fsdev_module.o 00:36:13.152 LINK doorbell_aers 00:36:13.152 CXX test/cpp_headers/ftl.o 00:36:13.152 LINK nvme_compliance 00:36:13.410 CC test/nvme/cuse/cuse.o 00:36:13.410 CXX test/cpp_headers/fuse_dispatcher.o 00:36:13.410 CXX test/cpp_headers/gpt_spec.o 00:36:13.410 CXX test/cpp_headers/hexlify.o 00:36:13.410 LINK fdp 00:36:13.410 CXX test/cpp_headers/histogram_data.o 00:36:13.410 CXX test/cpp_headers/idxd.o 00:36:13.410 CXX test/cpp_headers/idxd_spec.o 00:36:13.410 CXX test/cpp_headers/init.o 00:36:13.410 CXX test/cpp_headers/ioat.o 00:36:13.410 CXX test/cpp_headers/ioat_spec.o 00:36:13.410 CXX test/cpp_headers/iscsi_spec.o 00:36:13.667 CXX test/cpp_headers/json.o 00:36:13.667 CXX test/cpp_headers/jsonrpc.o 00:36:13.667 CXX test/cpp_headers/keyring.o 00:36:13.667 CXX test/cpp_headers/keyring_module.o 00:36:13.667 CXX test/cpp_headers/likely.o 00:36:13.667 CXX test/cpp_headers/lvol.o 00:36:13.667 CXX test/cpp_headers/log.o 00:36:13.667 CXX test/cpp_headers/md5.o 00:36:13.667 CXX test/cpp_headers/memory.o 00:36:13.667 CXX test/cpp_headers/mmio.o 00:36:13.667 CXX test/cpp_headers/nbd.o 00:36:13.926 CXX test/cpp_headers/net.o 00:36:13.926 CXX test/cpp_headers/notify.o 00:36:13.926 CXX 
test/cpp_headers/nvme.o 00:36:13.926 CXX test/cpp_headers/nvme_intel.o 00:36:13.926 CXX test/cpp_headers/nvme_ocssd.o 00:36:13.926 CXX test/cpp_headers/nvme_ocssd_spec.o 00:36:13.926 CXX test/cpp_headers/nvme_spec.o 00:36:13.926 CXX test/cpp_headers/nvme_zns.o 00:36:13.926 CXX test/cpp_headers/nvmf_cmd.o 00:36:13.926 CXX test/cpp_headers/nvmf_fc_spec.o 00:36:13.926 CXX test/cpp_headers/nvmf.o 00:36:13.926 CXX test/cpp_headers/nvmf_spec.o 00:36:14.184 CXX test/cpp_headers/nvmf_transport.o 00:36:14.184 CXX test/cpp_headers/opal.o 00:36:14.184 CXX test/cpp_headers/opal_spec.o 00:36:14.184 CXX test/cpp_headers/pci_ids.o 00:36:14.184 CXX test/cpp_headers/pipe.o 00:36:14.184 CXX test/cpp_headers/queue.o 00:36:14.184 CXX test/cpp_headers/reduce.o 00:36:14.184 CXX test/cpp_headers/rpc.o 00:36:14.184 CXX test/cpp_headers/scheduler.o 00:36:14.184 CXX test/cpp_headers/scsi.o 00:36:14.184 CXX test/cpp_headers/scsi_spec.o 00:36:14.184 CXX test/cpp_headers/sock.o 00:36:14.442 CXX test/cpp_headers/stdinc.o 00:36:14.442 CXX test/cpp_headers/string.o 00:36:14.442 CXX test/cpp_headers/thread.o 00:36:14.442 CXX test/cpp_headers/trace.o 00:36:14.442 CXX test/cpp_headers/trace_parser.o 00:36:14.442 CXX test/cpp_headers/tree.o 00:36:14.442 CXX test/cpp_headers/ublk.o 00:36:14.442 CXX test/cpp_headers/util.o 00:36:14.442 CXX test/cpp_headers/uuid.o 00:36:14.442 CXX test/cpp_headers/version.o 00:36:14.442 CXX test/cpp_headers/vfio_user_pci.o 00:36:14.442 CXX test/cpp_headers/vfio_user_spec.o 00:36:14.700 CXX test/cpp_headers/vhost.o 00:36:14.700 CXX test/cpp_headers/vmd.o 00:36:14.700 CXX test/cpp_headers/xor.o 00:36:14.700 CXX test/cpp_headers/zipf.o 00:36:14.700 LINK cuse 00:36:17.234 LINK esnap 00:36:17.492 00:36:17.492 real 1m28.044s 00:36:17.492 user 8m13.477s 00:36:17.492 sys 1m31.946s 00:36:17.492 05:27:04 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:36:17.492 05:27:04 make -- common/autotest_common.sh@10 -- $ set +x 00:36:17.492 ************************************ 00:36:17.492 END TEST make 00:36:17.492 ************************************ 00:36:17.751 05:27:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:36:17.751 05:27:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:17.751 05:27:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:17.751 05:27:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:17.751 05:27:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:36:17.751 05:27:04 -- pm/common@44 -- $ pid=5301 00:36:17.751 05:27:04 -- pm/common@50 -- $ kill -TERM 5301 00:36:17.751 05:27:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:17.751 05:27:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:36:17.751 05:27:04 -- pm/common@44 -- $ pid=5302 00:36:17.751 05:27:04 -- pm/common@50 -- $ kill -TERM 5302 00:36:17.751 05:27:04 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:36:17.751 05:27:04 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:36:17.751 05:27:04 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:17.751 05:27:04 -- common/autotest_common.sh@1693 -- # lcov --version 00:36:17.751 05:27:04 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:17.751 05:27:04 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:17.751 05:27:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 
00:36:17.751 05:27:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:17.751 05:27:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:17.751 05:27:04 -- scripts/common.sh@336 -- # IFS=.-: 00:36:17.751 05:27:04 -- scripts/common.sh@336 -- # read -ra ver1 00:36:17.751 05:27:04 -- scripts/common.sh@337 -- # IFS=.-: 00:36:17.751 05:27:04 -- scripts/common.sh@337 -- # read -ra ver2 00:36:17.751 05:27:04 -- scripts/common.sh@338 -- # local 'op=<' 00:36:17.751 05:27:04 -- scripts/common.sh@340 -- # ver1_l=2 00:36:17.751 05:27:04 -- scripts/common.sh@341 -- # ver2_l=1 00:36:17.751 05:27:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:17.751 05:27:04 -- scripts/common.sh@344 -- # case "$op" in 00:36:17.751 05:27:04 -- scripts/common.sh@345 -- # : 1 00:36:17.751 05:27:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:17.751 05:27:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:17.751 05:27:04 -- scripts/common.sh@365 -- # decimal 1 00:36:17.751 05:27:04 -- scripts/common.sh@353 -- # local d=1 00:36:17.751 05:27:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:17.751 05:27:04 -- scripts/common.sh@355 -- # echo 1 00:36:17.751 05:27:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:36:17.751 05:27:04 -- scripts/common.sh@366 -- # decimal 2 00:36:17.751 05:27:04 -- scripts/common.sh@353 -- # local d=2 00:36:17.751 05:27:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:17.751 05:27:04 -- scripts/common.sh@355 -- # echo 2 00:36:17.751 05:27:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:36:17.751 05:27:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:17.751 05:27:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:17.751 05:27:04 -- scripts/common.sh@368 -- # return 0 00:36:17.751 05:27:04 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:17.751 05:27:04 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.751 --rc genhtml_branch_coverage=1 00:36:17.751 --rc genhtml_function_coverage=1 00:36:17.751 --rc genhtml_legend=1 00:36:17.751 --rc geninfo_all_blocks=1 00:36:17.751 --rc geninfo_unexecuted_blocks=1 00:36:17.751 00:36:17.751 ' 00:36:17.751 05:27:04 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.751 --rc genhtml_branch_coverage=1 00:36:17.751 --rc genhtml_function_coverage=1 00:36:17.751 --rc genhtml_legend=1 00:36:17.751 --rc geninfo_all_blocks=1 00:36:17.751 --rc geninfo_unexecuted_blocks=1 00:36:17.751 00:36:17.751 ' 00:36:17.751 05:27:04 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.751 --rc genhtml_branch_coverage=1 00:36:17.751 --rc genhtml_function_coverage=1 00:36:17.751 --rc genhtml_legend=1 00:36:17.751 --rc geninfo_all_blocks=1 00:36:17.751 --rc geninfo_unexecuted_blocks=1 00:36:17.751 00:36:17.751 ' 00:36:17.751 05:27:04 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.751 --rc genhtml_branch_coverage=1 00:36:17.751 --rc genhtml_function_coverage=1 00:36:17.751 --rc genhtml_legend=1 00:36:17.751 --rc geninfo_all_blocks=1 00:36:17.751 --rc geninfo_unexecuted_blocks=1 00:36:17.751 00:36:17.751 ' 00:36:17.751 05:27:04 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:17.751 05:27:04 -- nvmf/common.sh@7 -- # uname -s 00:36:17.751 05:27:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:17.751 05:27:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:17.751 05:27:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:17.751 05:27:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:17.751 05:27:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:17.751 05:27:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:17.751 05:27:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:17.751 05:27:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:17.751 05:27:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:17.751 05:27:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:17.751 05:27:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:36:17.751 05:27:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:36:17.751 05:27:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:17.751 05:27:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:17.751 05:27:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:17.751 05:27:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:17.751 05:27:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:17.751 05:27:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:36:17.751 05:27:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:17.751 05:27:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:17.751 05:27:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:17.751 05:27:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.751 05:27:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.751 05:27:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.751 05:27:04 -- paths/export.sh@5 -- # export PATH 00:36:17.751 05:27:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.751 05:27:04 -- nvmf/common.sh@51 -- # : 0 00:36:17.751 05:27:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:17.751 05:27:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:17.751 05:27:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:17.751 05:27:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:17.751 05:27:04 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:17.751 05:27:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:17.751 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:17.751 05:27:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:17.751 05:27:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:17.752 05:27:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:17.752 05:27:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:36:17.752 05:27:04 -- spdk/autotest.sh@32 -- # uname -s 00:36:17.752 05:27:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:36:17.752 05:27:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:36:17.752 05:27:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:36:17.752 05:27:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:36:17.752 05:27:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:36:17.752 05:27:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:36:18.010 05:27:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:36:18.010 05:27:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:36:18.010 05:27:04 -- spdk/autotest.sh@48 -- # udevadm_pid=54372 00:36:18.010 05:27:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:36:18.010 05:27:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:36:18.010 05:27:04 -- pm/common@17 -- # local monitor 00:36:18.010 05:27:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:36:18.011 05:27:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:36:18.011 05:27:04 -- pm/common@25 -- # sleep 1 00:36:18.011 05:27:04 -- pm/common@21 -- # date +%s 00:36:18.011 05:27:04 -- pm/common@21 -- # date +%s 00:36:18.011 05:27:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733722024 00:36:18.011 05:27:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733722024 00:36:18.011 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733722024_collect-cpu-load.pm.log 00:36:18.011 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733722024_collect-vmstat.pm.log 00:36:18.945 05:27:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:36:18.945 05:27:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:36:18.945 05:27:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:18.945 05:27:05 -- common/autotest_common.sh@10 -- # set +x 00:36:18.945 05:27:05 -- spdk/autotest.sh@59 -- # create_test_list 00:36:18.945 05:27:05 -- common/autotest_common.sh@752 -- # xtrace_disable 00:36:18.945 05:27:05 -- common/autotest_common.sh@10 -- # set +x 00:36:18.945 05:27:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:36:18.945 05:27:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:36:18.945 05:27:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:36:18.945 05:27:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:36:18.945 05:27:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:36:18.945 05:27:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:36:18.945 05:27:05 -- common/autotest_common.sh@1457 -- # uname 00:36:18.945 05:27:05 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:36:18.945 05:27:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:36:18.945 05:27:05 -- common/autotest_common.sh@1477 -- # uname 00:36:18.945 05:27:05 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:36:18.945 05:27:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:36:18.945 05:27:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:36:18.945 lcov: LCOV version 1.15 00:36:18.945 05:27:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:36:33.849 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:36:33.849 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:36:48.733 05:27:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:36:48.733 05:27:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:48.733 05:27:35 -- common/autotest_common.sh@10 -- # set +x 00:36:48.733 05:27:35 -- spdk/autotest.sh@78 -- # rm -f 00:36:48.733 05:27:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:48.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:48.992 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:48.992 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:48.992 05:27:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:36:48.992 05:27:35 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:36:48.992 05:27:35 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:36:48.992 05:27:35 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:36:48.992 05:27:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:36:48.992 05:27:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:36:48.992 05:27:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:48.992 05:27:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:48.992 05:27:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:48.992 05:27:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:36:48.992 05:27:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:36:48.992 05:27:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:36:48.992 05:27:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:36:48.992 05:27:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:48.992 05:27:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:36:48.992 05:27:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:36:48.992 05:27:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:36:48.992 05:27:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:36:48.992 05:27:35 -- common/autotest_common.sh@1653 -- 
# [[ none != none ]] 00:36:48.992 05:27:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:36:48.992 05:27:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:36:48.992 05:27:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:36:48.992 05:27:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:36:48.992 05:27:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:48.992 05:27:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:36:48.992 05:27:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:36:48.992 05:27:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:36:48.992 05:27:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:36:48.992 05:27:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:36:48.992 05:27:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:36:48.992 No valid GPT data, bailing 00:36:48.992 05:27:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:48.992 05:27:35 -- scripts/common.sh@394 -- # pt= 00:36:48.992 05:27:35 -- scripts/common.sh@395 -- # return 1 00:36:48.993 05:27:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:36:48.993 1+0 records in 00:36:48.993 1+0 records out 00:36:48.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477169 s, 220 MB/s 00:36:48.993 05:27:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:36:48.993 05:27:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:36:48.993 05:27:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:36:48.993 05:27:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:36:48.993 05:27:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:36:48.993 No valid GPT data, bailing 00:36:49.252 05:27:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:36:49.252 05:27:35 -- scripts/common.sh@394 -- # pt= 00:36:49.252 05:27:35 -- scripts/common.sh@395 -- # return 1 00:36:49.252 05:27:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:36:49.252 1+0 records in 00:36:49.252 1+0 records out 00:36:49.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00362613 s, 289 MB/s 00:36:49.252 05:27:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:36:49.252 05:27:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:36:49.252 05:27:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:36:49.252 05:27:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:36:49.252 05:27:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:36:49.252 No valid GPT data, bailing 00:36:49.252 05:27:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:36:49.252 05:27:36 -- scripts/common.sh@394 -- # pt= 00:36:49.252 05:27:36 -- scripts/common.sh@395 -- # return 1 00:36:49.252 05:27:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:36:49.252 1+0 records in 00:36:49.252 1+0 records out 00:36:49.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475703 s, 220 MB/s 00:36:49.252 05:27:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:36:49.252 05:27:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:36:49.252 05:27:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:36:49.252 05:27:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:36:49.252 05:27:36 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:36:49.252 No valid GPT data, bailing 00:36:49.252 05:27:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:36:49.252 05:27:36 -- scripts/common.sh@394 -- # pt= 00:36:49.252 05:27:36 -- scripts/common.sh@395 -- # return 1 00:36:49.252 05:27:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:36:49.252 1+0 records in 00:36:49.252 1+0 records out 00:36:49.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380392 s, 276 MB/s 00:36:49.252 05:27:36 -- spdk/autotest.sh@105 -- # sync 00:36:49.252 05:27:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:36:49.252 05:27:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:36:49.252 05:27:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:36:51.782 05:27:38 -- spdk/autotest.sh@111 -- # uname -s 00:36:51.782 05:27:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:36:51.782 05:27:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:36:51.782 05:27:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:36:52.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:52.041 Hugepages 00:36:52.041 node hugesize free / total 00:36:52.041 node0 1048576kB 0 / 0 00:36:52.041 node0 2048kB 0 / 0 00:36:52.041 00:36:52.041 Type BDF Vendor Device NUMA Driver Device Block devices 00:36:52.041 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:36:52.041 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:36:52.298 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:36:52.299 05:27:39 -- spdk/autotest.sh@117 -- # uname -s 00:36:52.299 05:27:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:36:52.299 05:27:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:36:52.299 05:27:39 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:52.879 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:52.879 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:53.149 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:53.149 05:27:39 -- common/autotest_common.sh@1517 -- # sleep 1 00:36:54.085 05:27:40 -- common/autotest_common.sh@1518 -- # bdfs=() 00:36:54.085 05:27:40 -- common/autotest_common.sh@1518 -- # local bdfs 00:36:54.085 05:27:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:36:54.085 05:27:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:36:54.085 05:27:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:54.085 05:27:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:36:54.085 05:27:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:54.085 05:27:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:36:54.085 05:27:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:54.085 05:27:41 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:36:54.085 05:27:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:36:54.085 05:27:41 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:54.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:36:54.652 Waiting for block devices as requested 00:36:54.652 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:54.652 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:54.652 05:27:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:36:54.652 05:27:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:36:54.652 05:27:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:36:54.652 05:27:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:36:54.652 05:27:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:36:54.652 05:27:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:36:54.652 05:27:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:36:54.652 05:27:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:36:54.652 05:27:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:36:54.652 05:27:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:36:54.652 05:27:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:36:54.652 05:27:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:36:54.652 05:27:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:36:54.652 05:27:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:36:54.652 05:27:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:36:54.652 05:27:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:36:54.652 05:27:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:36:54.652 05:27:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:36:54.652 05:27:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:36:54.652 05:27:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:36:54.652 05:27:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:36:54.652 05:27:41 -- common/autotest_common.sh@1543 -- # continue 00:36:54.652 05:27:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:36:54.911 05:27:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:36:54.911 05:27:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:36:54.911 05:27:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:36:54.911 05:27:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:36:54.911 05:27:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:36:54.911 05:27:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:36:54.911 05:27:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:36:54.911 05:27:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:36:54.911 05:27:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:36:54.911 05:27:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:36:54.911 05:27:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:36:54.911 05:27:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:36:54.911 05:27:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:36:54.911 05:27:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:36:54.911 05:27:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:36:54.911 05:27:41 
-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:36:54.911 05:27:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:36:54.911 05:27:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:36:54.911 05:27:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:36:54.911 05:27:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:36:54.911 05:27:41 -- common/autotest_common.sh@1543 -- # continue 00:36:54.911 05:27:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:36:54.911 05:27:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:54.911 05:27:41 -- common/autotest_common.sh@10 -- # set +x 00:36:54.911 05:27:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:36:54.911 05:27:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:54.911 05:27:41 -- common/autotest_common.sh@10 -- # set +x 00:36:54.911 05:27:41 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:55.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:55.478 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:55.736 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:55.736 05:27:42 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:36:55.736 05:27:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:55.736 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:36:55.736 05:27:42 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:36:55.736 05:27:42 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:36:55.736 05:27:42 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:36:55.736 05:27:42 -- common/autotest_common.sh@1563 -- # bdfs=() 00:36:55.736 05:27:42 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:36:55.736 05:27:42 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:36:55.736 05:27:42 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:36:55.736 05:27:42 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:36:55.737 05:27:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:55.737 05:27:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:36:55.737 05:27:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:55.737 05:27:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:36:55.737 05:27:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:55.737 05:27:42 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:36:55.737 05:27:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:36:55.737 05:27:42 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:36:55.737 05:27:42 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:36:55.737 05:27:42 -- common/autotest_common.sh@1566 -- # device=0x0010 00:36:55.737 05:27:42 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:36:55.737 05:27:42 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:36:55.737 05:27:42 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:36:55.737 05:27:42 -- common/autotest_common.sh@1566 -- # device=0x0010 00:36:55.737 05:27:42 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:36:55.737 05:27:42 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:36:55.737 05:27:42 -- common/autotest_common.sh@1572 -- # return 0 
00:36:55.737 05:27:42 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:36:55.737 05:27:42 -- common/autotest_common.sh@1580 -- # return 0 00:36:55.737 05:27:42 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:36:55.737 05:27:42 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:36:55.737 05:27:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:36:55.737 05:27:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:36:55.737 05:27:42 -- spdk/autotest.sh@149 -- # timing_enter lib 00:36:55.737 05:27:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:55.737 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:36:55.737 05:27:42 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:36:55.737 05:27:42 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:36:55.737 05:27:42 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:36:55.737 05:27:42 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:36:55.737 05:27:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:55.737 05:27:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:55.737 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:36:55.737 ************************************ 00:36:55.737 START TEST env 00:36:55.737 ************************************ 00:36:55.737 05:27:42 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:36:55.996 * Looking for test storage... 00:36:55.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1693 -- # lcov --version 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:55.996 05:27:42 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:55.996 05:27:42 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:55.996 05:27:42 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:55.996 05:27:42 env -- scripts/common.sh@336 -- # IFS=.-: 00:36:55.996 05:27:42 env -- scripts/common.sh@336 -- # read -ra ver1 00:36:55.996 05:27:42 env -- scripts/common.sh@337 -- # IFS=.-: 00:36:55.996 05:27:42 env -- scripts/common.sh@337 -- # read -ra ver2 00:36:55.996 05:27:42 env -- scripts/common.sh@338 -- # local 'op=<' 00:36:55.996 05:27:42 env -- scripts/common.sh@340 -- # ver1_l=2 00:36:55.996 05:27:42 env -- scripts/common.sh@341 -- # ver2_l=1 00:36:55.996 05:27:42 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:55.996 05:27:42 env -- scripts/common.sh@344 -- # case "$op" in 00:36:55.996 05:27:42 env -- scripts/common.sh@345 -- # : 1 00:36:55.996 05:27:42 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:55.996 05:27:42 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:55.996 05:27:42 env -- scripts/common.sh@365 -- # decimal 1 00:36:55.996 05:27:42 env -- scripts/common.sh@353 -- # local d=1 00:36:55.996 05:27:42 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:55.996 05:27:42 env -- scripts/common.sh@355 -- # echo 1 00:36:55.996 05:27:42 env -- scripts/common.sh@365 -- # ver1[v]=1 00:36:55.996 05:27:42 env -- scripts/common.sh@366 -- # decimal 2 00:36:55.996 05:27:42 env -- scripts/common.sh@353 -- # local d=2 00:36:55.996 05:27:42 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:55.996 05:27:42 env -- scripts/common.sh@355 -- # echo 2 00:36:55.996 05:27:42 env -- scripts/common.sh@366 -- # ver2[v]=2 00:36:55.996 05:27:42 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:55.996 05:27:42 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:55.996 05:27:42 env -- scripts/common.sh@368 -- # return 0 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:55.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.996 --rc genhtml_branch_coverage=1 00:36:55.996 --rc genhtml_function_coverage=1 00:36:55.996 --rc genhtml_legend=1 00:36:55.996 --rc geninfo_all_blocks=1 00:36:55.996 --rc geninfo_unexecuted_blocks=1 00:36:55.996 00:36:55.996 ' 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:55.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.996 --rc genhtml_branch_coverage=1 00:36:55.996 --rc genhtml_function_coverage=1 00:36:55.996 --rc genhtml_legend=1 00:36:55.996 --rc geninfo_all_blocks=1 00:36:55.996 --rc geninfo_unexecuted_blocks=1 00:36:55.996 00:36:55.996 ' 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:55.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.996 --rc genhtml_branch_coverage=1 00:36:55.996 --rc genhtml_function_coverage=1 00:36:55.996 --rc genhtml_legend=1 00:36:55.996 --rc geninfo_all_blocks=1 00:36:55.996 --rc geninfo_unexecuted_blocks=1 00:36:55.996 00:36:55.996 ' 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:55.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.996 --rc genhtml_branch_coverage=1 00:36:55.996 --rc genhtml_function_coverage=1 00:36:55.996 --rc genhtml_legend=1 00:36:55.996 --rc geninfo_all_blocks=1 00:36:55.996 --rc geninfo_unexecuted_blocks=1 00:36:55.996 00:36:55.996 ' 00:36:55.996 05:27:42 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:55.996 05:27:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:55.996 05:27:42 env -- common/autotest_common.sh@10 -- # set +x 00:36:55.996 ************************************ 00:36:55.996 START TEST env_memory 00:36:55.996 ************************************ 00:36:55.996 05:27:42 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:36:55.996 00:36:55.996 00:36:55.996 CUnit - A unit testing framework for C - Version 2.1-3 00:36:55.996 http://cunit.sourceforge.net/ 00:36:55.996 00:36:55.996 00:36:55.996 Suite: mem_map_2mb 00:36:55.996 Test: alloc and free memory map ...[2024-12-09 05:27:42.890687] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 311:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:36:55.996 passed 00:36:55.996 Test: mem map translation ...[2024-12-09 05:27:42.923706] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:36:55.996 [2024-12-09 05:27:42.923801] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:36:55.996 [2024-12-09 05:27:42.923887] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 623:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:36:55.996 [2024-12-09 05:27:42.923898] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 639:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:36:56.255 passed 00:36:56.255 Test: mem map registration ...[2024-12-09 05:27:42.996546] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:36:56.255 [2024-12-09 05:27:42.996631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:36:56.255 passed 00:36:56.255 Test: mem map adjacent registrations ...passed 00:36:56.255 Suite: mem_map_4kb 00:36:56.255 Test: alloc and free memory map ...[2024-12-09 05:27:43.185616] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 311:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:36:56.255 passed 00:36:56.513 Test: mem map translation ...[2024-12-09 05:27:43.226458] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=4096 len=1234 00:36:56.513 [2024-12-09 05:27:43.226513] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=4096 00:36:56.514 [2024-12-09 05:27:43.256468] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 623:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:36:56.514 [2024-12-09 05:27:43.256587] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 639:spdk_mem_map_set_translation: *ERROR*: could not get 0xfffffffff000 map 00:36:56.514 passed 00:36:56.514 Test: mem map registration ...[2024-12-09 05:27:43.345726] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=1000 len=1234 00:36:56.514 [2024-12-09 05:27:43.345800] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=4096 00:36:56.514 passed 00:36:56.514 Test: mem map adjacent registrations ...passed 00:36:56.514 00:36:56.514 Run Summary: Type Total Ran Passed Failed Inactive 00:36:56.514 suites 2 2 n/a 0 0 00:36:56.514 tests 8 8 8 0 0 00:36:56.514 asserts 304 304 304 0 n/a 00:36:56.514 00:36:56.514 Elapsed time = 0.596 seconds 00:36:56.514 00:36:56.514 real 0m0.613s 00:36:56.514 user 0m0.589s 00:36:56.514 sys 0m0.018s 00:36:56.514 05:27:43 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:56.514 05:27:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:36:56.514 ************************************ 00:36:56.514 END TEST 
env_memory 00:36:56.514 ************************************ 00:36:56.773 05:27:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:36:56.773 05:27:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:56.773 05:27:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:56.773 05:27:43 env -- common/autotest_common.sh@10 -- # set +x 00:36:56.773 ************************************ 00:36:56.773 START TEST env_vtophys 00:36:56.773 ************************************ 00:36:56.773 05:27:43 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:36:56.773 EAL: lib.eal log level changed from notice to debug 00:36:56.773 EAL: Detected lcore 0 as core 0 on socket 0 00:36:56.773 EAL: Detected lcore 1 as core 0 on socket 0 00:36:56.773 EAL: Detected lcore 2 as core 0 on socket 0 00:36:56.773 EAL: Detected lcore 3 as core 0 on socket 0 00:36:56.773 EAL: Detected lcore 4 as core 0 on socket 0 00:36:56.773 EAL: Detected lcore 5 as core 0 on socket 0 00:36:56.773 EAL: Detected lcore 6 as core 0 on socket 0 00:36:56.773 EAL: Detected lcore 7 as core 0 on socket 0 00:36:56.773 EAL: Detected lcore 8 as core 0 on socket 0 00:36:56.773 EAL: Detected lcore 9 as core 0 on socket 0 00:36:56.773 EAL: Maximum logical cores by configuration: 128 00:36:56.773 EAL: Detected CPU lcores: 10 00:36:56.773 EAL: Detected NUMA nodes: 1 00:36:56.773 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:36:56.773 EAL: Detected shared linkage of DPDK 00:36:56.773 EAL: No shared files mode enabled, IPC will be disabled 00:36:56.773 EAL: Selected IOVA mode 'PA' 00:36:56.773 EAL: Probing VFIO support... 00:36:56.773 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:36:56.773 EAL: VFIO modules not loaded, skipping VFIO support... 00:36:56.773 EAL: Ask a virtual area of 0x2e000 bytes 00:36:56.773 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:36:56.773 EAL: Setting up physically contiguous memory... 
00:36:56.773 EAL: Setting maximum number of open files to 524288 00:36:56.773 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:36:56.773 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:36:56.773 EAL: Ask a virtual area of 0x61000 bytes 00:36:56.773 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:36:56.773 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:36:56.773 EAL: Ask a virtual area of 0x400000000 bytes 00:36:56.773 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:36:56.773 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:36:56.773 EAL: Ask a virtual area of 0x61000 bytes 00:36:56.773 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:36:56.773 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:36:56.773 EAL: Ask a virtual area of 0x400000000 bytes 00:36:56.773 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:36:56.773 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:36:56.773 EAL: Ask a virtual area of 0x61000 bytes 00:36:56.773 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:36:56.773 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:36:56.773 EAL: Ask a virtual area of 0x400000000 bytes 00:36:56.773 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:36:56.773 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:36:56.773 EAL: Ask a virtual area of 0x61000 bytes 00:36:56.773 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:36:56.773 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:36:56.773 EAL: Ask a virtual area of 0x400000000 bytes 00:36:56.773 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:36:56.773 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:36:56.773 EAL: Hugepages will be freed exactly as allocated. 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: TSC frequency is ~2200000 KHz 00:36:56.773 EAL: Main lcore 0 is ready (tid=7fd9575fda00;cpuset=[0]) 00:36:56.773 EAL: Trying to obtain current memory policy. 00:36:56.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:56.773 EAL: Restoring previous memory policy: 0 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was expanded by 2MB 00:36:56.773 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:36:56.773 EAL: No PCI address specified using 'addr=' in: bus=pci 00:36:56.773 EAL: Mem event callback 'spdk:(nil)' registered 00:36:56.773 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:36:56.773 00:36:56.773 00:36:56.773 CUnit - A unit testing framework for C - Version 2.1-3 00:36:56.773 http://cunit.sourceforge.net/ 00:36:56.773 00:36:56.773 00:36:56.773 Suite: components_suite 00:36:56.773 Test: vtophys_malloc_test ...passed 00:36:56.773 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:36:56.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:56.773 EAL: Restoring previous memory policy: 4 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was expanded by 4MB 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was shrunk by 4MB 00:36:56.773 EAL: Trying to obtain current memory policy. 00:36:56.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:56.773 EAL: Restoring previous memory policy: 4 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was expanded by 6MB 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was shrunk by 6MB 00:36:56.773 EAL: Trying to obtain current memory policy. 00:36:56.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:56.773 EAL: Restoring previous memory policy: 4 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was expanded by 10MB 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was shrunk by 10MB 00:36:56.773 EAL: Trying to obtain current memory policy. 00:36:56.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:56.773 EAL: Restoring previous memory policy: 4 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was expanded by 18MB 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was shrunk by 18MB 00:36:56.773 EAL: Trying to obtain current memory policy. 00:36:56.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:56.773 EAL: Restoring previous memory policy: 4 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was expanded by 34MB 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was shrunk by 34MB 00:36:56.773 EAL: Trying to obtain current memory policy. 
00:36:56.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:56.773 EAL: Restoring previous memory policy: 4 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was expanded by 66MB 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was shrunk by 66MB 00:36:56.773 EAL: Trying to obtain current memory policy. 00:36:56.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:56.773 EAL: Restoring previous memory policy: 4 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:56.773 EAL: request: mp_malloc_sync 00:36:56.773 EAL: No shared files mode enabled, IPC is disabled 00:36:56.773 EAL: Heap on socket 0 was expanded by 130MB 00:36:56.773 EAL: Calling mem event callback 'spdk:(nil)' 00:36:57.032 EAL: request: mp_malloc_sync 00:36:57.032 EAL: No shared files mode enabled, IPC is disabled 00:36:57.032 EAL: Heap on socket 0 was shrunk by 130MB 00:36:57.032 EAL: Trying to obtain current memory policy. 00:36:57.032 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:57.032 EAL: Restoring previous memory policy: 4 00:36:57.032 EAL: Calling mem event callback 'spdk:(nil)' 00:36:57.032 EAL: request: mp_malloc_sync 00:36:57.032 EAL: No shared files mode enabled, IPC is disabled 00:36:57.032 EAL: Heap on socket 0 was expanded by 258MB 00:36:57.032 EAL: Calling mem event callback 'spdk:(nil)' 00:36:57.032 EAL: request: mp_malloc_sync 00:36:57.032 EAL: No shared files mode enabled, IPC is disabled 00:36:57.032 EAL: Heap on socket 0 was shrunk by 258MB 00:36:57.032 EAL: Trying to obtain current memory policy. 00:36:57.032 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:57.032 EAL: Restoring previous memory policy: 4 00:36:57.032 EAL: Calling mem event callback 'spdk:(nil)' 00:36:57.032 EAL: request: mp_malloc_sync 00:36:57.032 EAL: No shared files mode enabled, IPC is disabled 00:36:57.032 EAL: Heap on socket 0 was expanded by 514MB 00:36:57.032 EAL: Calling mem event callback 'spdk:(nil)' 00:36:57.291 EAL: request: mp_malloc_sync 00:36:57.291 EAL: No shared files mode enabled, IPC is disabled 00:36:57.291 EAL: Heap on socket 0 was shrunk by 514MB 00:36:57.291 EAL: Trying to obtain current memory policy. 
00:36:57.291 EAL: Setting policy MPOL_PREFERRED for socket 0 00:36:57.291 EAL: Restoring previous memory policy: 4 00:36:57.291 EAL: Calling mem event callback 'spdk:(nil)' 00:36:57.291 EAL: request: mp_malloc_sync 00:36:57.291 EAL: No shared files mode enabled, IPC is disabled 00:36:57.291 EAL: Heap on socket 0 was expanded by 1026MB 00:36:57.291 EAL: Calling mem event callback 'spdk:(nil)' 00:36:57.552 passed 00:36:57.552 00:36:57.552 Run Summary: Type Total Ran Passed Failed Inactive 00:36:57.552 suites 1 1 n/a 0 0 00:36:57.552 tests 2 2 2 0 0 00:36:57.552 asserts 5470 5470 5470 0 n/a 00:36:57.552 00:36:57.552 Elapsed time = 0.648 seconds 00:36:57.552 EAL: request: mp_malloc_sync 00:36:57.552 EAL: No shared files mode enabled, IPC is disabled 00:36:57.552 EAL: Heap on socket 0 was shrunk by 1026MB 00:36:57.552 EAL: Calling mem event callback 'spdk:(nil)' 00:36:57.552 EAL: request: mp_malloc_sync 00:36:57.552 EAL: No shared files mode enabled, IPC is disabled 00:36:57.552 EAL: Heap on socket 0 was shrunk by 2MB 00:36:57.552 EAL: No shared files mode enabled, IPC is disabled 00:36:57.552 EAL: No shared files mode enabled, IPC is disabled 00:36:57.552 EAL: No shared files mode enabled, IPC is disabled 00:36:57.552 00:36:57.552 real 0m0.857s 00:36:57.552 user 0m0.450s 00:36:57.552 sys 0m0.275s 00:36:57.552 05:27:44 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.552 05:27:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:36:57.552 ************************************ 00:36:57.552 END TEST env_vtophys 00:36:57.552 ************************************ 00:36:57.552 05:27:44 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:36:57.552 05:27:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:57.552 05:27:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:57.552 05:27:44 env -- common/autotest_common.sh@10 -- # set +x 00:36:57.552 ************************************ 00:36:57.552 START TEST env_pci 00:36:57.552 ************************************ 00:36:57.552 05:27:44 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:36:57.552 00:36:57.552 00:36:57.552 CUnit - A unit testing framework for C - Version 2.1-3 00:36:57.552 http://cunit.sourceforge.net/ 00:36:57.552 00:36:57.552 00:36:57.552 Suite: pci 00:36:57.552 Test: pci_hook ...[2024-12-09 05:27:44.430857] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56593 has claimed it 00:36:57.552 passed 00:36:57.552 00:36:57.552 Run Summary: Type Total Ran Passed Failed Inactive 00:36:57.552 suites 1 1 n/a 0 0 00:36:57.552 tests 1 1 1 0 0 00:36:57.552 asserts 25 25 25 0 n/a 00:36:57.552 00:36:57.552 Elapsed time = 0.002 seconds 00:36:57.552 EAL: Cannot find device (10000:00:01.0) 00:36:57.552 EAL: Failed to attach device on primary process 00:36:57.552 00:36:57.552 real 0m0.019s 00:36:57.552 user 0m0.006s 00:36:57.552 sys 0m0.011s 00:36:57.552 05:27:44 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.552 05:27:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:36:57.552 ************************************ 00:36:57.552 END TEST env_pci 00:36:57.552 ************************************ 00:36:57.552 05:27:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:36:57.552 05:27:44 env -- env/env.sh@15 -- # uname 00:36:57.552 05:27:44 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:36:57.552 05:27:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:36:57.552 05:27:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:36:57.552 05:27:44 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:57.552 05:27:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:57.552 05:27:44 env -- common/autotest_common.sh@10 -- # set +x 00:36:57.812 ************************************ 00:36:57.812 START TEST env_dpdk_post_init 00:36:57.812 ************************************ 00:36:57.812 05:27:44 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:36:57.812 EAL: Detected CPU lcores: 10 00:36:57.812 EAL: Detected NUMA nodes: 1 00:36:57.812 EAL: Detected shared linkage of DPDK 00:36:57.812 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:36:57.812 EAL: Selected IOVA mode 'PA' 00:36:57.813 TELEMETRY: No legacy callbacks, legacy socket not created 00:36:57.813 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:36:57.813 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:36:57.813 Starting DPDK initialization... 00:36:57.813 Starting SPDK post initialization... 00:36:57.813 SPDK NVMe probe 00:36:57.813 Attaching to 0000:00:10.0 00:36:57.813 Attaching to 0000:00:11.0 00:36:57.813 Attached to 0000:00:10.0 00:36:57.813 Attached to 0000:00:11.0 00:36:57.813 Cleaning up... 00:36:57.813 00:36:57.813 real 0m0.221s 00:36:57.813 user 0m0.067s 00:36:57.813 sys 0m0.053s 00:36:57.813 05:27:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.813 05:27:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:36:57.813 ************************************ 00:36:57.813 END TEST env_dpdk_post_init 00:36:57.813 ************************************ 00:36:58.072 05:27:44 env -- env/env.sh@26 -- # uname 00:36:58.072 05:27:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:36:58.072 05:27:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:36:58.072 05:27:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:58.072 05:27:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:58.072 05:27:44 env -- common/autotest_common.sh@10 -- # set +x 00:36:58.072 ************************************ 00:36:58.072 START TEST env_mem_callbacks 00:36:58.072 ************************************ 00:36:58.072 05:27:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:36:58.072 EAL: Detected CPU lcores: 10 00:36:58.072 EAL: Detected NUMA nodes: 1 00:36:58.072 EAL: Detected shared linkage of DPDK 00:36:58.072 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:36:58.072 EAL: Selected IOVA mode 'PA' 00:36:58.072 TELEMETRY: No legacy callbacks, legacy socket not created 00:36:58.072 00:36:58.072 00:36:58.072 CUnit - A unit testing framework for C - Version 2.1-3 00:36:58.072 http://cunit.sourceforge.net/ 00:36:58.072 00:36:58.072 00:36:58.072 Suite: memory 00:36:58.072 Test: test ... 
00:36:58.072 register 0x200000200000 2097152 00:36:58.072 malloc 3145728 00:36:58.072 register 0x200000400000 4194304 00:36:58.072 buf 0x200000500000 len 3145728 PASSED 00:36:58.072 malloc 64 00:36:58.072 buf 0x2000004fff40 len 64 PASSED 00:36:58.072 malloc 4194304 00:36:58.072 register 0x200000800000 6291456 00:36:58.072 buf 0x200000a00000 len 4194304 PASSED 00:36:58.072 free 0x200000500000 3145728 00:36:58.072 free 0x2000004fff40 64 00:36:58.072 unregister 0x200000400000 4194304 PASSED 00:36:58.072 free 0x200000a00000 4194304 00:36:58.072 unregister 0x200000800000 6291456 PASSED 00:36:58.072 malloc 8388608 00:36:58.072 register 0x200000400000 10485760 00:36:58.072 buf 0x200000600000 len 8388608 PASSED 00:36:58.072 free 0x200000600000 8388608 00:36:58.072 unregister 0x200000400000 10485760 PASSED 00:36:58.072 passed 00:36:58.072 00:36:58.072 Run Summary: Type Total Ran Passed Failed Inactive 00:36:58.072 suites 1 1 n/a 0 0 00:36:58.072 tests 1 1 1 0 0 00:36:58.072 asserts 15 15 15 0 n/a 00:36:58.072 00:36:58.072 Elapsed time = 0.008 seconds 00:36:58.072 00:36:58.072 real 0m0.146s 00:36:58.072 user 0m0.020s 00:36:58.072 sys 0m0.024s 00:36:58.072 05:27:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:58.072 05:27:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:36:58.072 ************************************ 00:36:58.072 END TEST env_mem_callbacks 00:36:58.072 ************************************ 00:36:58.072 00:36:58.072 real 0m2.352s 00:36:58.072 user 0m1.342s 00:36:58.072 sys 0m0.649s 00:36:58.072 05:27:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:58.072 05:27:44 env -- common/autotest_common.sh@10 -- # set +x 00:36:58.072 ************************************ 00:36:58.072 END TEST env 00:36:58.072 ************************************ 00:36:58.072 05:27:45 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:36:58.072 05:27:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:58.072 05:27:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:58.072 05:27:45 -- common/autotest_common.sh@10 -- # set +x 00:36:58.330 ************************************ 00:36:58.330 START TEST rpc 00:36:58.330 ************************************ 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:36:58.330 * Looking for test storage... 
00:36:58.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:58.330 05:27:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:58.330 05:27:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:58.330 05:27:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:58.330 05:27:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:36:58.330 05:27:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:36:58.330 05:27:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:36:58.330 05:27:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:36:58.330 05:27:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:36:58.330 05:27:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:36:58.330 05:27:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:36:58.330 05:27:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:58.330 05:27:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:36:58.330 05:27:45 rpc -- scripts/common.sh@345 -- # : 1 00:36:58.330 05:27:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:58.330 05:27:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:58.330 05:27:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:36:58.330 05:27:45 rpc -- scripts/common.sh@353 -- # local d=1 00:36:58.330 05:27:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:58.330 05:27:45 rpc -- scripts/common.sh@355 -- # echo 1 00:36:58.330 05:27:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:36:58.330 05:27:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:36:58.330 05:27:45 rpc -- scripts/common.sh@353 -- # local d=2 00:36:58.330 05:27:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:58.330 05:27:45 rpc -- scripts/common.sh@355 -- # echo 2 00:36:58.330 05:27:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:36:58.330 05:27:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:58.330 05:27:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:58.330 05:27:45 rpc -- scripts/common.sh@368 -- # return 0 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:58.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.330 --rc genhtml_branch_coverage=1 00:36:58.330 --rc genhtml_function_coverage=1 00:36:58.330 --rc genhtml_legend=1 00:36:58.330 --rc geninfo_all_blocks=1 00:36:58.330 --rc geninfo_unexecuted_blocks=1 00:36:58.330 00:36:58.330 ' 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:58.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.330 --rc genhtml_branch_coverage=1 00:36:58.330 --rc genhtml_function_coverage=1 00:36:58.330 --rc genhtml_legend=1 00:36:58.330 --rc geninfo_all_blocks=1 00:36:58.330 --rc geninfo_unexecuted_blocks=1 00:36:58.330 00:36:58.330 ' 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:58.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.330 --rc genhtml_branch_coverage=1 00:36:58.330 --rc genhtml_function_coverage=1 00:36:58.330 --rc 
genhtml_legend=1 00:36:58.330 --rc geninfo_all_blocks=1 00:36:58.330 --rc geninfo_unexecuted_blocks=1 00:36:58.330 00:36:58.330 ' 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:58.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.330 --rc genhtml_branch_coverage=1 00:36:58.330 --rc genhtml_function_coverage=1 00:36:58.330 --rc genhtml_legend=1 00:36:58.330 --rc geninfo_all_blocks=1 00:36:58.330 --rc geninfo_unexecuted_blocks=1 00:36:58.330 00:36:58.330 ' 00:36:58.330 05:27:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56716 00:36:58.330 05:27:45 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:36:58.330 05:27:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:36:58.330 05:27:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56716 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 56716 ']' 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:58.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:58.330 05:27:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:36:58.589 [2024-12-09 05:27:45.309191] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:36:58.589 [2024-12-09 05:27:45.309804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56716 ] 00:36:58.589 [2024-12-09 05:27:45.463477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.589 [2024-12-09 05:27:45.503884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:36:58.589 [2024-12-09 05:27:45.503958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56716' to capture a snapshot of events at runtime. 00:36:58.589 [2024-12-09 05:27:45.503983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:58.589 [2024-12-09 05:27:45.503994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:58.589 [2024-12-09 05:27:45.504002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56716 for offline analysis/debug. 
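The rpc_integrity and rpc_daemon_integrity runs below drive this freshly started spdk_tgt over its JSON-RPC socket. A minimal sketch of that flow using scripts/rpc.py directly, assuming the default /var/tmp/spdk.sock socket (the tests themselves go through the rpc_cmd wrapper):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 8 512                      # Malloc0: 16384 blocks of 512 B, as listed in the bdev dumps below
$rpc bdev_passthru_create -b Malloc0 -p Passthru0  # claim Malloc0 behind a passthru vbdev
$rpc bdev_get_bdevs | jq length                    # expect 2 entries while Passthru0 holds the claim
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete Malloc0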
00:36:58.589 [2024-12-09 05:27:45.504431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:58.589 [2024-12-09 05:27:45.550222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:36:58.847 05:27:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:58.847 05:27:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:36:58.847 05:27:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:36:58.847 05:27:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:36:58.847 05:27:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:36:58.847 05:27:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:36:58.847 05:27:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:58.847 05:27:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:58.847 05:27:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:36:58.847 ************************************ 00:36:58.847 START TEST rpc_integrity 00:36:58.847 ************************************ 00:36:58.847 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:36:58.847 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:58.847 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.847 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:58.847 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.847 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:36:58.847 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:36:58.847 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:36:58.847 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:36:58.847 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.847 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:58.847 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.847 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:36:58.847 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:36:58.847 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.847 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:58.847 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.847 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:36:58.847 { 00:36:58.847 "name": "Malloc0", 00:36:58.847 "aliases": [ 00:36:58.847 "3ed8d831-2059-4938-a000-2b24e002db7d" 00:36:58.847 ], 00:36:58.847 "product_name": "Malloc disk", 00:36:58.847 "block_size": 512, 00:36:58.847 "num_blocks": 16384, 00:36:58.847 "uuid": "3ed8d831-2059-4938-a000-2b24e002db7d", 00:36:58.847 "assigned_rate_limits": { 00:36:58.847 "rw_ios_per_sec": 0, 00:36:58.847 "rw_mbytes_per_sec": 0, 00:36:58.847 "r_mbytes_per_sec": 0, 00:36:58.847 "w_mbytes_per_sec": 0 00:36:58.847 }, 00:36:58.847 "claimed": false, 00:36:58.847 "zoned": false, 00:36:58.847 
"supported_io_types": { 00:36:58.847 "read": true, 00:36:58.847 "write": true, 00:36:58.847 "unmap": true, 00:36:58.847 "flush": true, 00:36:58.847 "reset": true, 00:36:58.847 "nvme_admin": false, 00:36:58.847 "nvme_io": false, 00:36:58.847 "nvme_io_md": false, 00:36:58.847 "write_zeroes": true, 00:36:58.847 "zcopy": true, 00:36:58.847 "get_zone_info": false, 00:36:58.847 "zone_management": false, 00:36:58.847 "zone_append": false, 00:36:58.847 "compare": false, 00:36:58.847 "compare_and_write": false, 00:36:58.847 "abort": true, 00:36:58.847 "seek_hole": false, 00:36:58.847 "seek_data": false, 00:36:58.847 "copy": true, 00:36:58.847 "nvme_iov_md": false 00:36:58.847 }, 00:36:58.847 "memory_domains": [ 00:36:58.847 { 00:36:58.847 "dma_device_id": "system", 00:36:58.847 "dma_device_type": 1 00:36:58.847 }, 00:36:58.847 { 00:36:58.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.847 "dma_device_type": 2 00:36:58.847 } 00:36:58.847 ], 00:36:58.847 "driver_specific": {} 00:36:58.847 } 00:36:58.847 ]' 00:36:58.847 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:36:59.104 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:36:59.104 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:36:59.104 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.104 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.104 [2024-12-09 05:27:45.857462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:36:59.104 [2024-12-09 05:27:45.857532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:59.104 [2024-12-09 05:27:45.857551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1523050 00:36:59.104 [2024-12-09 05:27:45.857561] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:59.104 [2024-12-09 05:27:45.859235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:59.104 [2024-12-09 05:27:45.859267] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:36:59.104 Passthru0 00:36:59.104 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.104 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:36:59.104 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.104 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.104 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.104 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:36:59.104 { 00:36:59.104 "name": "Malloc0", 00:36:59.104 "aliases": [ 00:36:59.104 "3ed8d831-2059-4938-a000-2b24e002db7d" 00:36:59.104 ], 00:36:59.104 "product_name": "Malloc disk", 00:36:59.104 "block_size": 512, 00:36:59.104 "num_blocks": 16384, 00:36:59.104 "uuid": "3ed8d831-2059-4938-a000-2b24e002db7d", 00:36:59.104 "assigned_rate_limits": { 00:36:59.104 "rw_ios_per_sec": 0, 00:36:59.104 "rw_mbytes_per_sec": 0, 00:36:59.104 "r_mbytes_per_sec": 0, 00:36:59.104 "w_mbytes_per_sec": 0 00:36:59.104 }, 00:36:59.104 "claimed": true, 00:36:59.104 "claim_type": "exclusive_write", 00:36:59.104 "zoned": false, 00:36:59.104 "supported_io_types": { 00:36:59.104 "read": true, 00:36:59.104 "write": true, 00:36:59.104 "unmap": true, 00:36:59.104 "flush": true, 00:36:59.104 "reset": true, 00:36:59.104 "nvme_admin": false, 
00:36:59.104 "nvme_io": false, 00:36:59.104 "nvme_io_md": false, 00:36:59.104 "write_zeroes": true, 00:36:59.104 "zcopy": true, 00:36:59.104 "get_zone_info": false, 00:36:59.104 "zone_management": false, 00:36:59.104 "zone_append": false, 00:36:59.104 "compare": false, 00:36:59.104 "compare_and_write": false, 00:36:59.104 "abort": true, 00:36:59.104 "seek_hole": false, 00:36:59.104 "seek_data": false, 00:36:59.104 "copy": true, 00:36:59.104 "nvme_iov_md": false 00:36:59.104 }, 00:36:59.104 "memory_domains": [ 00:36:59.104 { 00:36:59.104 "dma_device_id": "system", 00:36:59.104 "dma_device_type": 1 00:36:59.104 }, 00:36:59.104 { 00:36:59.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:59.104 "dma_device_type": 2 00:36:59.104 } 00:36:59.104 ], 00:36:59.104 "driver_specific": {} 00:36:59.104 }, 00:36:59.104 { 00:36:59.104 "name": "Passthru0", 00:36:59.104 "aliases": [ 00:36:59.104 "2df1f5a9-1f63-5879-a7cc-b466da964e3f" 00:36:59.104 ], 00:36:59.104 "product_name": "passthru", 00:36:59.104 "block_size": 512, 00:36:59.104 "num_blocks": 16384, 00:36:59.104 "uuid": "2df1f5a9-1f63-5879-a7cc-b466da964e3f", 00:36:59.104 "assigned_rate_limits": { 00:36:59.104 "rw_ios_per_sec": 0, 00:36:59.104 "rw_mbytes_per_sec": 0, 00:36:59.104 "r_mbytes_per_sec": 0, 00:36:59.104 "w_mbytes_per_sec": 0 00:36:59.104 }, 00:36:59.104 "claimed": false, 00:36:59.104 "zoned": false, 00:36:59.104 "supported_io_types": { 00:36:59.104 "read": true, 00:36:59.104 "write": true, 00:36:59.104 "unmap": true, 00:36:59.104 "flush": true, 00:36:59.104 "reset": true, 00:36:59.104 "nvme_admin": false, 00:36:59.104 "nvme_io": false, 00:36:59.104 "nvme_io_md": false, 00:36:59.104 "write_zeroes": true, 00:36:59.104 "zcopy": true, 00:36:59.104 "get_zone_info": false, 00:36:59.104 "zone_management": false, 00:36:59.104 "zone_append": false, 00:36:59.104 "compare": false, 00:36:59.104 "compare_and_write": false, 00:36:59.104 "abort": true, 00:36:59.104 "seek_hole": false, 00:36:59.104 "seek_data": false, 00:36:59.104 "copy": true, 00:36:59.104 "nvme_iov_md": false 00:36:59.104 }, 00:36:59.104 "memory_domains": [ 00:36:59.104 { 00:36:59.104 "dma_device_id": "system", 00:36:59.104 "dma_device_type": 1 00:36:59.104 }, 00:36:59.104 { 00:36:59.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:59.104 "dma_device_type": 2 00:36:59.104 } 00:36:59.104 ], 00:36:59.104 "driver_specific": { 00:36:59.104 "passthru": { 00:36:59.104 "name": "Passthru0", 00:36:59.104 "base_bdev_name": "Malloc0" 00:36:59.104 } 00:36:59.104 } 00:36:59.104 } 00:36:59.104 ]' 00:36:59.104 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:36:59.104 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:36:59.104 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:36:59.104 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.104 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.105 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.105 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:36:59.105 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.105 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.105 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.105 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:36:59.105 05:27:45 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.105 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.105 05:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.105 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:36:59.105 05:27:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:36:59.105 ************************************ 00:36:59.105 END TEST rpc_integrity 00:36:59.105 ************************************ 00:36:59.105 05:27:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:36:59.105 00:36:59.105 real 0m0.321s 00:36:59.105 user 0m0.219s 00:36:59.105 sys 0m0.035s 00:36:59.105 05:27:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:59.105 05:27:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.105 05:27:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:36:59.105 05:27:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:59.105 05:27:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:59.105 05:27:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:36:59.361 ************************************ 00:36:59.361 START TEST rpc_plugins 00:36:59.361 ************************************ 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:36:59.361 { 00:36:59.361 "name": "Malloc1", 00:36:59.361 "aliases": [ 00:36:59.361 "21fd3296-a6af-427e-8287-45a93e242c11" 00:36:59.361 ], 00:36:59.361 "product_name": "Malloc disk", 00:36:59.361 "block_size": 4096, 00:36:59.361 "num_blocks": 256, 00:36:59.361 "uuid": "21fd3296-a6af-427e-8287-45a93e242c11", 00:36:59.361 "assigned_rate_limits": { 00:36:59.361 "rw_ios_per_sec": 0, 00:36:59.361 "rw_mbytes_per_sec": 0, 00:36:59.361 "r_mbytes_per_sec": 0, 00:36:59.361 "w_mbytes_per_sec": 0 00:36:59.361 }, 00:36:59.361 "claimed": false, 00:36:59.361 "zoned": false, 00:36:59.361 "supported_io_types": { 00:36:59.361 "read": true, 00:36:59.361 "write": true, 00:36:59.361 "unmap": true, 00:36:59.361 "flush": true, 00:36:59.361 "reset": true, 00:36:59.361 "nvme_admin": false, 00:36:59.361 "nvme_io": false, 00:36:59.361 "nvme_io_md": false, 00:36:59.361 "write_zeroes": true, 00:36:59.361 "zcopy": true, 00:36:59.361 "get_zone_info": false, 00:36:59.361 "zone_management": false, 00:36:59.361 "zone_append": false, 00:36:59.361 "compare": false, 00:36:59.361 "compare_and_write": false, 00:36:59.361 "abort": true, 00:36:59.361 "seek_hole": false, 00:36:59.361 "seek_data": false, 00:36:59.361 "copy": true, 00:36:59.361 "nvme_iov_md": false 00:36:59.361 }, 00:36:59.361 "memory_domains": [ 00:36:59.361 { 
00:36:59.361 "dma_device_id": "system", 00:36:59.361 "dma_device_type": 1 00:36:59.361 }, 00:36:59.361 { 00:36:59.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:59.361 "dma_device_type": 2 00:36:59.361 } 00:36:59.361 ], 00:36:59.361 "driver_specific": {} 00:36:59.361 } 00:36:59.361 ]' 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:36:59.361 ************************************ 00:36:59.361 END TEST rpc_plugins 00:36:59.361 ************************************ 00:36:59.361 05:27:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:36:59.361 00:36:59.361 real 0m0.163s 00:36:59.361 user 0m0.106s 00:36:59.361 sys 0m0.012s 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:59.361 05:27:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:36:59.361 05:27:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:36:59.361 05:27:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:59.361 05:27:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:59.361 05:27:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:36:59.361 ************************************ 00:36:59.361 START TEST rpc_trace_cmd_test 00:36:59.361 ************************************ 00:36:59.361 05:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:36:59.361 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:36:59.361 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:36:59.361 05:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.361 05:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.361 05:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.361 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:36:59.361 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56716", 00:36:59.361 "tpoint_group_mask": "0x8", 00:36:59.361 "iscsi_conn": { 00:36:59.361 "mask": "0x2", 00:36:59.361 "tpoint_mask": "0x0" 00:36:59.361 }, 00:36:59.361 "scsi": { 00:36:59.361 "mask": "0x4", 00:36:59.361 "tpoint_mask": "0x0" 00:36:59.361 }, 00:36:59.361 "bdev": { 00:36:59.361 "mask": "0x8", 00:36:59.361 "tpoint_mask": "0xffffffffffffffff" 00:36:59.361 }, 00:36:59.361 "nvmf_rdma": { 00:36:59.361 "mask": "0x10", 00:36:59.361 "tpoint_mask": "0x0" 00:36:59.361 }, 00:36:59.361 "nvmf_tcp": { 00:36:59.361 "mask": "0x20", 00:36:59.361 "tpoint_mask": "0x0" 00:36:59.361 }, 00:36:59.361 "ftl": { 00:36:59.361 
"mask": "0x40", 00:36:59.361 "tpoint_mask": "0x0" 00:36:59.361 }, 00:36:59.361 "blobfs": { 00:36:59.361 "mask": "0x80", 00:36:59.361 "tpoint_mask": "0x0" 00:36:59.361 }, 00:36:59.361 "dsa": { 00:36:59.361 "mask": "0x200", 00:36:59.361 "tpoint_mask": "0x0" 00:36:59.361 }, 00:36:59.361 "thread": { 00:36:59.361 "mask": "0x400", 00:36:59.361 "tpoint_mask": "0x0" 00:36:59.361 }, 00:36:59.362 "nvme_pcie": { 00:36:59.362 "mask": "0x800", 00:36:59.362 "tpoint_mask": "0x0" 00:36:59.362 }, 00:36:59.362 "iaa": { 00:36:59.362 "mask": "0x1000", 00:36:59.362 "tpoint_mask": "0x0" 00:36:59.362 }, 00:36:59.362 "nvme_tcp": { 00:36:59.362 "mask": "0x2000", 00:36:59.362 "tpoint_mask": "0x0" 00:36:59.362 }, 00:36:59.362 "bdev_nvme": { 00:36:59.362 "mask": "0x4000", 00:36:59.362 "tpoint_mask": "0x0" 00:36:59.362 }, 00:36:59.362 "sock": { 00:36:59.362 "mask": "0x8000", 00:36:59.362 "tpoint_mask": "0x0" 00:36:59.362 }, 00:36:59.362 "blob": { 00:36:59.362 "mask": "0x10000", 00:36:59.362 "tpoint_mask": "0x0" 00:36:59.362 }, 00:36:59.362 "bdev_raid": { 00:36:59.362 "mask": "0x20000", 00:36:59.362 "tpoint_mask": "0x0" 00:36:59.362 }, 00:36:59.362 "scheduler": { 00:36:59.362 "mask": "0x40000", 00:36:59.362 "tpoint_mask": "0x0" 00:36:59.362 } 00:36:59.362 }' 00:36:59.362 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:36:59.620 00:36:59.620 real 0m0.285s 00:36:59.620 user 0m0.241s 00:36:59.620 sys 0m0.033s 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:59.620 05:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.620 ************************************ 00:36:59.620 END TEST rpc_trace_cmd_test 00:36:59.620 ************************************ 00:36:59.878 05:27:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:36:59.878 05:27:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:36:59.878 05:27:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:36:59.878 05:27:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:59.878 05:27:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:59.878 05:27:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:36:59.878 ************************************ 00:36:59.878 START TEST rpc_daemon_integrity 00:36:59.878 ************************************ 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.878 
05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:36:59.878 { 00:36:59.878 "name": "Malloc2", 00:36:59.878 "aliases": [ 00:36:59.878 "0b2aad98-fb15-4714-8eb6-9c2bb8c26c6a" 00:36:59.878 ], 00:36:59.878 "product_name": "Malloc disk", 00:36:59.878 "block_size": 512, 00:36:59.878 "num_blocks": 16384, 00:36:59.878 "uuid": "0b2aad98-fb15-4714-8eb6-9c2bb8c26c6a", 00:36:59.878 "assigned_rate_limits": { 00:36:59.878 "rw_ios_per_sec": 0, 00:36:59.878 "rw_mbytes_per_sec": 0, 00:36:59.878 "r_mbytes_per_sec": 0, 00:36:59.878 "w_mbytes_per_sec": 0 00:36:59.878 }, 00:36:59.878 "claimed": false, 00:36:59.878 "zoned": false, 00:36:59.878 "supported_io_types": { 00:36:59.878 "read": true, 00:36:59.878 "write": true, 00:36:59.878 "unmap": true, 00:36:59.878 "flush": true, 00:36:59.878 "reset": true, 00:36:59.878 "nvme_admin": false, 00:36:59.878 "nvme_io": false, 00:36:59.878 "nvme_io_md": false, 00:36:59.878 "write_zeroes": true, 00:36:59.878 "zcopy": true, 00:36:59.878 "get_zone_info": false, 00:36:59.878 "zone_management": false, 00:36:59.878 "zone_append": false, 00:36:59.878 "compare": false, 00:36:59.878 "compare_and_write": false, 00:36:59.878 "abort": true, 00:36:59.878 "seek_hole": false, 00:36:59.878 "seek_data": false, 00:36:59.878 "copy": true, 00:36:59.878 "nvme_iov_md": false 00:36:59.878 }, 00:36:59.878 "memory_domains": [ 00:36:59.878 { 00:36:59.878 "dma_device_id": "system", 00:36:59.878 "dma_device_type": 1 00:36:59.878 }, 00:36:59.878 { 00:36:59.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:59.878 "dma_device_type": 2 00:36:59.878 } 00:36:59.878 ], 00:36:59.878 "driver_specific": {} 00:36:59.878 } 00:36:59.878 ]' 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.878 [2024-12-09 05:27:46.781889] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:36:59.878 [2024-12-09 05:27:46.781991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:36:59.878 [2024-12-09 05:27:46.782010] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x152e030 00:36:59.878 [2024-12-09 05:27:46.782018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:59.878 [2024-12-09 05:27:46.783546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:59.878 [2024-12-09 05:27:46.783598] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:36:59.878 Passthru0 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.878 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:36:59.879 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.879 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:36:59.879 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.879 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:36:59.879 { 00:36:59.879 "name": "Malloc2", 00:36:59.879 "aliases": [ 00:36:59.879 "0b2aad98-fb15-4714-8eb6-9c2bb8c26c6a" 00:36:59.879 ], 00:36:59.879 "product_name": "Malloc disk", 00:36:59.879 "block_size": 512, 00:36:59.879 "num_blocks": 16384, 00:36:59.879 "uuid": "0b2aad98-fb15-4714-8eb6-9c2bb8c26c6a", 00:36:59.879 "assigned_rate_limits": { 00:36:59.879 "rw_ios_per_sec": 0, 00:36:59.879 "rw_mbytes_per_sec": 0, 00:36:59.879 "r_mbytes_per_sec": 0, 00:36:59.879 "w_mbytes_per_sec": 0 00:36:59.879 }, 00:36:59.879 "claimed": true, 00:36:59.879 "claim_type": "exclusive_write", 00:36:59.879 "zoned": false, 00:36:59.879 "supported_io_types": { 00:36:59.879 "read": true, 00:36:59.879 "write": true, 00:36:59.879 "unmap": true, 00:36:59.879 "flush": true, 00:36:59.879 "reset": true, 00:36:59.879 "nvme_admin": false, 00:36:59.879 "nvme_io": false, 00:36:59.879 "nvme_io_md": false, 00:36:59.879 "write_zeroes": true, 00:36:59.879 "zcopy": true, 00:36:59.879 "get_zone_info": false, 00:36:59.879 "zone_management": false, 00:36:59.879 "zone_append": false, 00:36:59.879 "compare": false, 00:36:59.879 "compare_and_write": false, 00:36:59.879 "abort": true, 00:36:59.879 "seek_hole": false, 00:36:59.879 "seek_data": false, 00:36:59.879 "copy": true, 00:36:59.879 "nvme_iov_md": false 00:36:59.879 }, 00:36:59.879 "memory_domains": [ 00:36:59.879 { 00:36:59.879 "dma_device_id": "system", 00:36:59.879 "dma_device_type": 1 00:36:59.879 }, 00:36:59.879 { 00:36:59.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:59.879 "dma_device_type": 2 00:36:59.879 } 00:36:59.879 ], 00:36:59.879 "driver_specific": {} 00:36:59.879 }, 00:36:59.879 { 00:36:59.879 "name": "Passthru0", 00:36:59.879 "aliases": [ 00:36:59.879 "b7f6fa01-ef67-5869-82dc-a11e3e5b6343" 00:36:59.879 ], 00:36:59.879 "product_name": "passthru", 00:36:59.879 "block_size": 512, 00:36:59.879 "num_blocks": 16384, 00:36:59.879 "uuid": "b7f6fa01-ef67-5869-82dc-a11e3e5b6343", 00:36:59.879 "assigned_rate_limits": { 00:36:59.879 "rw_ios_per_sec": 0, 00:36:59.879 "rw_mbytes_per_sec": 0, 00:36:59.879 "r_mbytes_per_sec": 0, 00:36:59.879 "w_mbytes_per_sec": 0 00:36:59.879 }, 00:36:59.879 "claimed": false, 00:36:59.879 "zoned": false, 00:36:59.879 "supported_io_types": { 00:36:59.879 "read": true, 00:36:59.879 "write": true, 00:36:59.879 "unmap": true, 00:36:59.879 "flush": true, 00:36:59.879 "reset": true, 00:36:59.879 "nvme_admin": false, 00:36:59.879 "nvme_io": false, 00:36:59.879 
"nvme_io_md": false, 00:36:59.879 "write_zeroes": true, 00:36:59.879 "zcopy": true, 00:36:59.879 "get_zone_info": false, 00:36:59.879 "zone_management": false, 00:36:59.879 "zone_append": false, 00:36:59.879 "compare": false, 00:36:59.879 "compare_and_write": false, 00:36:59.879 "abort": true, 00:36:59.879 "seek_hole": false, 00:36:59.879 "seek_data": false, 00:36:59.879 "copy": true, 00:36:59.879 "nvme_iov_md": false 00:36:59.879 }, 00:36:59.879 "memory_domains": [ 00:36:59.879 { 00:36:59.879 "dma_device_id": "system", 00:36:59.879 "dma_device_type": 1 00:36:59.879 }, 00:36:59.879 { 00:36:59.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:59.879 "dma_device_type": 2 00:36:59.879 } 00:36:59.879 ], 00:36:59.879 "driver_specific": { 00:36:59.879 "passthru": { 00:36:59.879 "name": "Passthru0", 00:36:59.879 "base_bdev_name": "Malloc2" 00:36:59.879 } 00:36:59.879 } 00:36:59.879 } 00:36:59.879 ]' 00:36:59.879 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:37:00.136 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:37:00.136 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:37:00.136 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.136 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:00.136 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.136 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:37:00.136 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.136 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:00.136 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.137 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:37:00.137 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.137 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:00.137 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.137 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:37:00.137 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:37:00.137 ************************************ 00:37:00.137 END TEST rpc_daemon_integrity 00:37:00.137 ************************************ 00:37:00.137 05:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:37:00.137 00:37:00.137 real 0m0.326s 00:37:00.137 user 0m0.223s 00:37:00.137 sys 0m0.040s 00:37:00.137 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:00.137 05:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:00.137 05:27:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:00.137 05:27:46 rpc -- rpc/rpc.sh@84 -- # killprocess 56716 00:37:00.137 05:27:46 rpc -- common/autotest_common.sh@954 -- # '[' -z 56716 ']' 00:37:00.137 05:27:46 rpc -- common/autotest_common.sh@958 -- # kill -0 56716 00:37:00.137 05:27:46 rpc -- common/autotest_common.sh@959 -- # uname 00:37:00.137 05:27:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:00.137 05:27:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56716 00:37:00.137 05:27:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:37:00.137 05:27:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:00.137 05:27:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56716' 00:37:00.137 killing process with pid 56716 00:37:00.137 05:27:47 rpc -- common/autotest_common.sh@973 -- # kill 56716 00:37:00.137 05:27:47 rpc -- common/autotest_common.sh@978 -- # wait 56716 00:37:00.395 00:37:00.395 real 0m2.268s 00:37:00.395 user 0m3.010s 00:37:00.395 sys 0m0.592s 00:37:00.395 05:27:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:00.395 ************************************ 00:37:00.395 END TEST rpc 00:37:00.395 ************************************ 00:37:00.395 05:27:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:37:00.395 05:27:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:37:00.654 05:27:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:00.654 05:27:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:00.654 05:27:47 -- common/autotest_common.sh@10 -- # set +x 00:37:00.654 ************************************ 00:37:00.654 START TEST skip_rpc 00:37:00.654 ************************************ 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:37:00.654 * Looking for test storage... 00:37:00.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:00.654 05:27:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:00.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.654 --rc genhtml_branch_coverage=1 00:37:00.654 --rc genhtml_function_coverage=1 00:37:00.654 --rc genhtml_legend=1 00:37:00.654 --rc geninfo_all_blocks=1 00:37:00.654 --rc geninfo_unexecuted_blocks=1 00:37:00.654 00:37:00.654 ' 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:00.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.654 --rc genhtml_branch_coverage=1 00:37:00.654 --rc genhtml_function_coverage=1 00:37:00.654 --rc genhtml_legend=1 00:37:00.654 --rc geninfo_all_blocks=1 00:37:00.654 --rc geninfo_unexecuted_blocks=1 00:37:00.654 00:37:00.654 ' 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:00.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.654 --rc genhtml_branch_coverage=1 00:37:00.654 --rc genhtml_function_coverage=1 00:37:00.654 --rc genhtml_legend=1 00:37:00.654 --rc geninfo_all_blocks=1 00:37:00.654 --rc geninfo_unexecuted_blocks=1 00:37:00.654 00:37:00.654 ' 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:00.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.654 --rc genhtml_branch_coverage=1 00:37:00.654 --rc genhtml_function_coverage=1 00:37:00.654 --rc genhtml_legend=1 00:37:00.654 --rc geninfo_all_blocks=1 00:37:00.654 --rc geninfo_unexecuted_blocks=1 00:37:00.654 00:37:00.654 ' 00:37:00.654 05:27:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:37:00.654 05:27:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:37:00.654 05:27:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:00.654 05:27:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:00.654 ************************************ 00:37:00.654 START TEST skip_rpc 00:37:00.654 ************************************ 00:37:00.654 05:27:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:37:00.654 05:27:47 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56909 00:37:00.654 05:27:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:37:00.654 05:27:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:37:00.654 05:27:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:37:00.913 [2024-12-09 05:27:47.633859] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:00.913 [2024-12-09 05:27:47.634169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56909 ] 00:37:00.913 [2024-12-09 05:27:47.787156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.913 [2024-12-09 05:27:47.819601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.913 [2024-12-09 05:27:47.859075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56909 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56909 ']' 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56909 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56909 00:37:06.222 killing process with pid 56909 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56909' 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56909 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56909 00:37:06.222 00:37:06.222 real 0m5.337s 00:37:06.222 user 0m5.055s 00:37:06.222 sys 0m0.186s 00:37:06.222 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:06.222 ************************************ 00:37:06.223 END TEST skip_rpc 00:37:06.223 ************************************ 00:37:06.223 05:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:06.223 05:27:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:37:06.223 05:27:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:06.223 05:27:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:06.223 05:27:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:06.223 ************************************ 00:37:06.223 START TEST skip_rpc_with_json 00:37:06.223 ************************************ 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56996 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56996 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56996 ']' 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:06.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:06.223 05:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:37:06.223 [2024-12-09 05:27:53.024170] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
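The skip_rpc_with_json case starting here is essentially a config round-trip. A sketch of the idea, assuming scripts/rpc.py against the default socket; the paths and flags mirror the commands visible in the log that follows:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_get_transports --trtype tcp   # fails first: no TCP transport exists yet
$rpc nvmf_create_transport -t tcp       # create it over RPC
$rpc save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
# a second target is then booted with RPC disabled, purely from that saved JSON:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json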
00:37:06.223 [2024-12-09 05:27:53.024592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56996 ] 00:37:06.223 [2024-12-09 05:27:53.170115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.482 [2024-12-09 05:27:53.202185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:06.482 [2024-12-09 05:27:53.241125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:37:06.482 [2024-12-09 05:27:53.371563] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:37:06.482 request: 00:37:06.482 { 00:37:06.482 "trtype": "tcp", 00:37:06.482 "method": "nvmf_get_transports", 00:37:06.482 "req_id": 1 00:37:06.482 } 00:37:06.482 Got JSON-RPC error response 00:37:06.482 response: 00:37:06.482 { 00:37:06.482 "code": -19, 00:37:06.482 "message": "No such device" 00:37:06.482 } 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:37:06.482 [2024-12-09 05:27:53.383676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.482 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:37:06.747 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.747 05:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:37:06.747 { 00:37:06.747 "subsystems": [ 00:37:06.747 { 00:37:06.747 "subsystem": "fsdev", 00:37:06.747 "config": [ 00:37:06.747 { 00:37:06.747 "method": "fsdev_set_opts", 00:37:06.747 "params": { 00:37:06.747 "fsdev_io_pool_size": 65535, 00:37:06.747 "fsdev_io_cache_size": 256 00:37:06.747 } 00:37:06.747 } 00:37:06.747 ] 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "subsystem": "keyring", 00:37:06.747 "config": [] 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "subsystem": "iobuf", 00:37:06.747 "config": [ 00:37:06.747 { 00:37:06.747 "method": "iobuf_set_options", 00:37:06.747 "params": { 00:37:06.747 "small_pool_count": 8192, 00:37:06.747 "large_pool_count": 1024, 00:37:06.747 "small_bufsize": 8192, 00:37:06.747 "large_bufsize": 135168, 00:37:06.747 "enable_numa": false 00:37:06.747 } 
00:37:06.747 } 00:37:06.747 ] 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "subsystem": "sock", 00:37:06.747 "config": [ 00:37:06.747 { 00:37:06.747 "method": "sock_set_default_impl", 00:37:06.747 "params": { 00:37:06.747 "impl_name": "uring" 00:37:06.747 } 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "method": "sock_impl_set_options", 00:37:06.747 "params": { 00:37:06.747 "impl_name": "ssl", 00:37:06.747 "recv_buf_size": 4096, 00:37:06.747 "send_buf_size": 4096, 00:37:06.747 "enable_recv_pipe": true, 00:37:06.747 "enable_quickack": false, 00:37:06.747 "enable_placement_id": 0, 00:37:06.747 "enable_zerocopy_send_server": true, 00:37:06.747 "enable_zerocopy_send_client": false, 00:37:06.747 "zerocopy_threshold": 0, 00:37:06.747 "tls_version": 0, 00:37:06.747 "enable_ktls": false 00:37:06.747 } 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "method": "sock_impl_set_options", 00:37:06.747 "params": { 00:37:06.747 "impl_name": "posix", 00:37:06.747 "recv_buf_size": 2097152, 00:37:06.747 "send_buf_size": 2097152, 00:37:06.747 "enable_recv_pipe": true, 00:37:06.747 "enable_quickack": false, 00:37:06.747 "enable_placement_id": 0, 00:37:06.747 "enable_zerocopy_send_server": true, 00:37:06.747 "enable_zerocopy_send_client": false, 00:37:06.747 "zerocopy_threshold": 0, 00:37:06.747 "tls_version": 0, 00:37:06.747 "enable_ktls": false 00:37:06.747 } 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "method": "sock_impl_set_options", 00:37:06.747 "params": { 00:37:06.747 "impl_name": "uring", 00:37:06.747 "recv_buf_size": 2097152, 00:37:06.747 "send_buf_size": 2097152, 00:37:06.747 "enable_recv_pipe": true, 00:37:06.747 "enable_quickack": false, 00:37:06.747 "enable_placement_id": 0, 00:37:06.747 "enable_zerocopy_send_server": false, 00:37:06.747 "enable_zerocopy_send_client": false, 00:37:06.747 "zerocopy_threshold": 0, 00:37:06.747 "tls_version": 0, 00:37:06.747 "enable_ktls": false 00:37:06.747 } 00:37:06.747 } 00:37:06.747 ] 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "subsystem": "vmd", 00:37:06.747 "config": [] 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "subsystem": "accel", 00:37:06.747 "config": [ 00:37:06.747 { 00:37:06.747 "method": "accel_set_options", 00:37:06.747 "params": { 00:37:06.747 "small_cache_size": 128, 00:37:06.747 "large_cache_size": 16, 00:37:06.747 "task_count": 2048, 00:37:06.747 "sequence_count": 2048, 00:37:06.747 "buf_count": 2048 00:37:06.747 } 00:37:06.747 } 00:37:06.747 ] 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "subsystem": "bdev", 00:37:06.747 "config": [ 00:37:06.747 { 00:37:06.747 "method": "bdev_set_options", 00:37:06.747 "params": { 00:37:06.747 "bdev_io_pool_size": 65535, 00:37:06.747 "bdev_io_cache_size": 256, 00:37:06.747 "bdev_auto_examine": true, 00:37:06.747 "iobuf_small_cache_size": 128, 00:37:06.747 "iobuf_large_cache_size": 16 00:37:06.747 } 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "method": "bdev_raid_set_options", 00:37:06.747 "params": { 00:37:06.747 "process_window_size_kb": 1024, 00:37:06.747 "process_max_bandwidth_mb_sec": 0 00:37:06.747 } 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "method": "bdev_iscsi_set_options", 00:37:06.747 "params": { 00:37:06.747 "timeout_sec": 30 00:37:06.747 } 00:37:06.747 }, 00:37:06.747 { 00:37:06.747 "method": "bdev_nvme_set_options", 00:37:06.747 "params": { 00:37:06.747 "action_on_timeout": "none", 00:37:06.747 "timeout_us": 0, 00:37:06.747 "timeout_admin_us": 0, 00:37:06.747 "keep_alive_timeout_ms": 10000, 00:37:06.747 "arbitration_burst": 0, 00:37:06.747 "low_priority_weight": 0, 00:37:06.747 "medium_priority_weight": 
0, 00:37:06.747 "high_priority_weight": 0, 00:37:06.747 "nvme_adminq_poll_period_us": 10000, 00:37:06.747 "nvme_ioq_poll_period_us": 0, 00:37:06.747 "io_queue_requests": 0, 00:37:06.747 "delay_cmd_submit": true, 00:37:06.748 "transport_retry_count": 4, 00:37:06.748 "bdev_retry_count": 3, 00:37:06.748 "transport_ack_timeout": 0, 00:37:06.748 "ctrlr_loss_timeout_sec": 0, 00:37:06.748 "reconnect_delay_sec": 0, 00:37:06.748 "fast_io_fail_timeout_sec": 0, 00:37:06.748 "disable_auto_failback": false, 00:37:06.748 "generate_uuids": false, 00:37:06.748 "transport_tos": 0, 00:37:06.748 "nvme_error_stat": false, 00:37:06.748 "rdma_srq_size": 0, 00:37:06.748 "io_path_stat": false, 00:37:06.748 "allow_accel_sequence": false, 00:37:06.748 "rdma_max_cq_size": 0, 00:37:06.748 "rdma_cm_event_timeout_ms": 0, 00:37:06.748 "dhchap_digests": [ 00:37:06.748 "sha256", 00:37:06.748 "sha384", 00:37:06.748 "sha512" 00:37:06.748 ], 00:37:06.748 "dhchap_dhgroups": [ 00:37:06.748 "null", 00:37:06.748 "ffdhe2048", 00:37:06.748 "ffdhe3072", 00:37:06.748 "ffdhe4096", 00:37:06.748 "ffdhe6144", 00:37:06.748 "ffdhe8192" 00:37:06.748 ] 00:37:06.748 } 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "method": "bdev_nvme_set_hotplug", 00:37:06.748 "params": { 00:37:06.748 "period_us": 100000, 00:37:06.748 "enable": false 00:37:06.748 } 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "method": "bdev_wait_for_examine" 00:37:06.748 } 00:37:06.748 ] 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "subsystem": "scsi", 00:37:06.748 "config": null 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "subsystem": "scheduler", 00:37:06.748 "config": [ 00:37:06.748 { 00:37:06.748 "method": "framework_set_scheduler", 00:37:06.748 "params": { 00:37:06.748 "name": "static" 00:37:06.748 } 00:37:06.748 } 00:37:06.748 ] 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "subsystem": "vhost_scsi", 00:37:06.748 "config": [] 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "subsystem": "vhost_blk", 00:37:06.748 "config": [] 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "subsystem": "ublk", 00:37:06.748 "config": [] 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "subsystem": "nbd", 00:37:06.748 "config": [] 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "subsystem": "nvmf", 00:37:06.748 "config": [ 00:37:06.748 { 00:37:06.748 "method": "nvmf_set_config", 00:37:06.748 "params": { 00:37:06.748 "discovery_filter": "match_any", 00:37:06.748 "admin_cmd_passthru": { 00:37:06.748 "identify_ctrlr": false 00:37:06.748 }, 00:37:06.748 "dhchap_digests": [ 00:37:06.748 "sha256", 00:37:06.748 "sha384", 00:37:06.748 "sha512" 00:37:06.748 ], 00:37:06.748 "dhchap_dhgroups": [ 00:37:06.748 "null", 00:37:06.748 "ffdhe2048", 00:37:06.748 "ffdhe3072", 00:37:06.748 "ffdhe4096", 00:37:06.748 "ffdhe6144", 00:37:06.748 "ffdhe8192" 00:37:06.748 ] 00:37:06.748 } 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "method": "nvmf_set_max_subsystems", 00:37:06.748 "params": { 00:37:06.748 "max_subsystems": 1024 00:37:06.748 } 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "method": "nvmf_set_crdt", 00:37:06.748 "params": { 00:37:06.748 "crdt1": 0, 00:37:06.748 "crdt2": 0, 00:37:06.748 "crdt3": 0 00:37:06.748 } 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "method": "nvmf_create_transport", 00:37:06.748 "params": { 00:37:06.748 "trtype": "TCP", 00:37:06.748 "max_queue_depth": 128, 00:37:06.748 "max_io_qpairs_per_ctrlr": 127, 00:37:06.748 "in_capsule_data_size": 4096, 00:37:06.748 "max_io_size": 131072, 00:37:06.748 "io_unit_size": 131072, 00:37:06.748 "max_aq_depth": 128, 00:37:06.748 "num_shared_buffers": 511, 00:37:06.748 
"buf_cache_size": 4294967295, 00:37:06.748 "dif_insert_or_strip": false, 00:37:06.748 "zcopy": false, 00:37:06.748 "c2h_success": true, 00:37:06.748 "sock_priority": 0, 00:37:06.748 "abort_timeout_sec": 1, 00:37:06.748 "ack_timeout": 0, 00:37:06.748 "data_wr_pool_size": 0 00:37:06.748 } 00:37:06.748 } 00:37:06.748 ] 00:37:06.748 }, 00:37:06.748 { 00:37:06.748 "subsystem": "iscsi", 00:37:06.748 "config": [ 00:37:06.748 { 00:37:06.748 "method": "iscsi_set_options", 00:37:06.748 "params": { 00:37:06.748 "node_base": "iqn.2016-06.io.spdk", 00:37:06.748 "max_sessions": 128, 00:37:06.748 "max_connections_per_session": 2, 00:37:06.748 "max_queue_depth": 64, 00:37:06.748 "default_time2wait": 2, 00:37:06.748 "default_time2retain": 20, 00:37:06.748 "first_burst_length": 8192, 00:37:06.748 "immediate_data": true, 00:37:06.748 "allow_duplicated_isid": false, 00:37:06.748 "error_recovery_level": 0, 00:37:06.748 "nop_timeout": 60, 00:37:06.748 "nop_in_interval": 30, 00:37:06.748 "disable_chap": false, 00:37:06.748 "require_chap": false, 00:37:06.748 "mutual_chap": false, 00:37:06.748 "chap_group": 0, 00:37:06.748 "max_large_datain_per_connection": 64, 00:37:06.748 "max_r2t_per_connection": 4, 00:37:06.748 "pdu_pool_size": 36864, 00:37:06.748 "immediate_data_pool_size": 16384, 00:37:06.748 "data_out_pool_size": 2048 00:37:06.748 } 00:37:06.748 } 00:37:06.748 ] 00:37:06.748 } 00:37:06.748 ] 00:37:06.748 } 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56996 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56996 ']' 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56996 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56996 00:37:06.748 killing process with pid 56996 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56996' 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56996 00:37:06.748 05:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56996 00:37:07.007 05:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57016 00:37:07.007 05:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:37:07.007 05:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:37:12.275 05:27:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57016 00:37:12.275 05:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57016 ']' 00:37:12.275 05:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57016 00:37:12.275 05:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:37:12.275 05:27:58 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:12.275 05:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57016 00:37:12.275 killing process with pid 57016 00:37:12.275 05:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:12.275 05:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:12.275 05:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57016' 00:37:12.275 05:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57016 00:37:12.275 05:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57016 00:37:12.275 05:27:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:37:12.275 05:27:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:37:12.275 ************************************ 00:37:12.275 END TEST skip_rpc_with_json 00:37:12.275 ************************************ 00:37:12.275 00:37:12.275 real 0m6.258s 00:37:12.275 user 0m5.991s 00:37:12.275 sys 0m0.425s 00:37:12.275 05:27:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:12.275 05:27:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:37:12.533 05:27:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:37:12.533 05:27:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:12.533 05:27:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:12.533 05:27:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:12.533 ************************************ 00:37:12.533 START TEST skip_rpc_with_delay 00:37:12.533 ************************************ 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:12.533 05:27:59 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:37:12.533 [2024-12-09 05:27:59.338380] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:12.533 00:37:12.533 real 0m0.089s 00:37:12.533 user 0m0.058s 00:37:12.533 sys 0m0.030s 00:37:12.533 ************************************ 00:37:12.533 END TEST skip_rpc_with_delay 00:37:12.533 ************************************ 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:12.533 05:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:37:12.533 05:27:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:37:12.533 05:27:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:37:12.533 05:27:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:37:12.533 05:27:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:12.533 05:27:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:12.533 05:27:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:12.534 ************************************ 00:37:12.534 START TEST exit_on_failed_rpc_init 00:37:12.534 ************************************ 00:37:12.534 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:37:12.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:12.534 05:27:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57120 00:37:12.534 05:27:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:37:12.534 05:27:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57120 00:37:12.534 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57120 ']' 00:37:12.534 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.534 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:12.534 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:12.534 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:12.534 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:37:12.534 [2024-12-09 05:27:59.468192] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
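Editor's note on the skip_rpc_with_json output a little further up: the large JSON blob is the configuration captured from the first target (pid 56996) and replayed into a second target (pid 57016) started with --json; the test then greps the new target's log for 'TCP Transport Init' to prove the nvmf transport from the JSON actually came up. A minimal sketch of that round trip follows, using the paths and flags shown in the log; the save_config capture and the output redirect into log.txt are assumptions about how the harness produced those files, not lines quoted from it.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
  log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt

  $rpc save_config > "$cfg"          # capture the running target's config (assumed step)
  kill "$old_pid"; wait "$old_pid"   # stop the first target; $old_pid is illustrative

  $tgt --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 &   # relaunch purely from JSON
  new_pid=$!
  sleep 5                            # the test simply sleeps before checking
  grep -q 'TCP Transport Init' "$log"   # transport defined in the JSON must have started
  kill "$new_pid"; wait "$new_pid"
  rm "$log"

The skip_rpc_with_delay case above it is simpler still: it only asserts that spdk_tgt refuses to combine --no-rpc-server with --wait-for-rpc, which is exactly the app.c *ERROR* line printed in the log.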
00:37:12.534 [2024-12-09 05:27:59.468492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57120 ] 00:37:12.793 [2024-12-09 05:27:59.615708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.793 [2024-12-09 05:27:59.646644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.793 [2024-12-09 05:27:59.689343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:37:13.052 05:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:37:13.052 [2024-12-09 05:27:59.876058] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:13.052 [2024-12-09 05:27:59.876154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57130 ] 00:37:13.312 [2024-12-09 05:28:00.028103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.312 [2024-12-09 05:28:00.068997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:13.312 [2024-12-09 05:28:00.069345] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:37:13.312 [2024-12-09 05:28:00.069585] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:13.312 [2024-12-09 05:28:00.069722] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57120 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57120 ']' 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57120 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57120 00:37:13.312 killing process with pid 57120 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57120' 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57120 00:37:13.312 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57120 00:37:13.571 00:37:13.571 real 0m1.062s 00:37:13.571 user 0m1.291s 00:37:13.571 sys 0m0.252s 00:37:13.571 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:13.571 ************************************ 00:37:13.571 05:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:37:13.571 END TEST exit_on_failed_rpc_init 00:37:13.571 ************************************ 00:37:13.571 05:28:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:37:13.571 00:37:13.571 real 0m13.155s 00:37:13.571 user 0m12.578s 00:37:13.571 sys 0m1.095s 00:37:13.571 05:28:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:13.571 05:28:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:13.571 ************************************ 00:37:13.571 END TEST skip_rpc 00:37:13.571 ************************************ 00:37:13.830 05:28:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:37:13.830 05:28:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:13.830 05:28:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:13.830 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:37:13.830 
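Editor's note on the exit_on_failed_rpc_init leg that closes the skip_rpc suite above: it starts one spdk_tgt on the default RPC socket /var/tmp/spdk.sock and then expects a second instance, using the same default, to fail with the two rpc.c errors shown. A sketch of that scenario, assuming autotest_common.sh is sourced for waitforlisten; the real test wraps the second launch in its NOT helper rather than the plain if shown here.

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $tgt -m 0x1 &                 # first instance owns /var/tmp/spdk.sock
  pid1=$!
  waitforlisten "$pid1"         # helper from autotest_common.sh, as used in the log

  if $tgt -m 0x2; then          # second instance, same default RPC socket
      echo "unexpected: second target started" >&2
      exit 1                    # success here would fail the test
  fi
  # expected errors, as printed above:
  #   rpc.c: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
  #   rpc.c: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock

  kill "$pid1"; wait "$pid1"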
************************************ 00:37:13.830 START TEST rpc_client 00:37:13.830 ************************************ 00:37:13.830 05:28:00 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:37:13.830 * Looking for test storage... 00:37:13.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:37:13.830 05:28:00 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:13.831 05:28:00 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:37:13.831 05:28:00 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:13.831 05:28:00 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:13.831 05:28:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:37:13.831 05:28:00 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:13.831 05:28:00 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:13.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.831 --rc genhtml_branch_coverage=1 00:37:13.831 --rc genhtml_function_coverage=1 00:37:13.831 --rc genhtml_legend=1 00:37:13.831 --rc geninfo_all_blocks=1 00:37:13.831 --rc geninfo_unexecuted_blocks=1 00:37:13.831 00:37:13.831 ' 00:37:13.831 05:28:00 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:13.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.831 --rc genhtml_branch_coverage=1 00:37:13.831 --rc genhtml_function_coverage=1 00:37:13.831 --rc genhtml_legend=1 00:37:13.831 --rc geninfo_all_blocks=1 00:37:13.831 --rc geninfo_unexecuted_blocks=1 00:37:13.831 00:37:13.831 ' 00:37:13.831 05:28:00 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:13.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.831 --rc genhtml_branch_coverage=1 00:37:13.831 --rc genhtml_function_coverage=1 00:37:13.831 --rc genhtml_legend=1 00:37:13.831 --rc geninfo_all_blocks=1 00:37:13.831 --rc geninfo_unexecuted_blocks=1 00:37:13.831 00:37:13.831 ' 00:37:13.831 05:28:00 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:13.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.831 --rc genhtml_branch_coverage=1 00:37:13.831 --rc genhtml_function_coverage=1 00:37:13.831 --rc genhtml_legend=1 00:37:13.831 --rc geninfo_all_blocks=1 00:37:13.831 --rc geninfo_unexecuted_blocks=1 00:37:13.831 00:37:13.831 ' 00:37:13.831 05:28:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:37:13.831 OK 00:37:13.831 05:28:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:37:13.831 00:37:13.831 real 0m0.202s 00:37:13.831 user 0m0.127s 00:37:13.831 sys 0m0.086s 00:37:13.831 05:28:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:13.831 ************************************ 00:37:13.831 END TEST rpc_client 00:37:13.831 ************************************ 00:37:13.831 05:28:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:37:14.091 05:28:00 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:37:14.091 05:28:00 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:14.091 05:28:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:14.091 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:37:14.091 ************************************ 00:37:14.091 START TEST json_config 00:37:14.091 ************************************ 00:37:14.091 05:28:00 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:37:14.091 05:28:00 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:14.091 05:28:00 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:14.091 05:28:00 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:37:14.091 05:28:00 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:14.091 05:28:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:14.091 05:28:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:14.091 05:28:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:14.091 05:28:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:37:14.091 05:28:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:37:14.091 05:28:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:37:14.091 05:28:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:37:14.091 05:28:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:37:14.091 05:28:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:37:14.091 05:28:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:37:14.091 05:28:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:14.091 05:28:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:37:14.091 05:28:00 json_config -- scripts/common.sh@345 -- # : 1 00:37:14.091 05:28:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:14.091 05:28:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:14.091 05:28:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:37:14.091 05:28:00 json_config -- scripts/common.sh@353 -- # local d=1 00:37:14.091 05:28:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:14.091 05:28:00 json_config -- scripts/common.sh@355 -- # echo 1 00:37:14.091 05:28:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:37:14.091 05:28:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:37:14.091 05:28:00 json_config -- scripts/common.sh@353 -- # local d=2 00:37:14.091 05:28:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:14.091 05:28:00 json_config -- scripts/common.sh@355 -- # echo 2 00:37:14.091 05:28:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:37:14.091 05:28:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:14.091 05:28:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:14.091 05:28:00 json_config -- scripts/common.sh@368 -- # return 0 00:37:14.091 05:28:00 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:14.091 05:28:00 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:14.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.091 --rc genhtml_branch_coverage=1 00:37:14.091 --rc genhtml_function_coverage=1 00:37:14.091 --rc genhtml_legend=1 00:37:14.091 --rc geninfo_all_blocks=1 00:37:14.091 --rc geninfo_unexecuted_blocks=1 00:37:14.091 00:37:14.091 ' 00:37:14.091 05:28:00 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:14.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.091 --rc genhtml_branch_coverage=1 00:37:14.091 --rc genhtml_function_coverage=1 00:37:14.091 --rc genhtml_legend=1 00:37:14.091 --rc geninfo_all_blocks=1 00:37:14.091 --rc geninfo_unexecuted_blocks=1 00:37:14.091 00:37:14.091 ' 00:37:14.091 05:28:00 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:14.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.091 --rc genhtml_branch_coverage=1 00:37:14.091 --rc genhtml_function_coverage=1 00:37:14.091 --rc genhtml_legend=1 00:37:14.091 --rc geninfo_all_blocks=1 00:37:14.091 --rc geninfo_unexecuted_blocks=1 00:37:14.091 00:37:14.091 ' 00:37:14.091 05:28:00 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:14.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.091 --rc genhtml_branch_coverage=1 00:37:14.091 --rc genhtml_function_coverage=1 00:37:14.091 --rc genhtml_legend=1 00:37:14.091 --rc geninfo_all_blocks=1 00:37:14.091 --rc geninfo_unexecuted_blocks=1 00:37:14.091 00:37:14.091 ' 00:37:14.091 05:28:00 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:14.091 05:28:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:37:14.091 05:28:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:14.091 05:28:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:14.091 05:28:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:14.091 05:28:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:14.091 05:28:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:14.091 05:28:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:14.091 05:28:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:14.091 05:28:00 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:14.091 05:28:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:14.091 05:28:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:14.091 05:28:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:37:14.091 05:28:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:37:14.091 05:28:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:14.091 05:28:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:14.091 05:28:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:37:14.091 05:28:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:14.091 05:28:01 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:14.091 05:28:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:37:14.091 05:28:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:14.091 05:28:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:14.091 05:28:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:14.091 05:28:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.091 05:28:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.092 05:28:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.092 05:28:01 json_config -- paths/export.sh@5 -- # export PATH 00:37:14.092 05:28:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.092 05:28:01 json_config -- nvmf/common.sh@51 -- # : 0 00:37:14.092 05:28:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:14.092 05:28:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:14.092 05:28:01 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:14.092 05:28:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:14.092 05:28:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:14.092 05:28:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:14.092 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:14.092 05:28:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:14.092 05:28:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:14.092 05:28:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:37:14.092 INFO: JSON configuration test init 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:37:14.092 05:28:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.092 05:28:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:37:14.092 05:28:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.092 05:28:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:14.092 05:28:01 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:37:14.092 05:28:01 json_config -- json_config/common.sh@9 -- # local app=target 00:37:14.092 05:28:01 json_config -- json_config/common.sh@10 -- # shift 
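Editor's note on the declarations above: json_config/common.sh keeps the per-app RPC socket, extra parameters, and config path in those associative arrays, and json_config_test_start_app expands them into the spdk_tgt command line that appears next in the log. A sketch of that expansion for the target app, with every value taken from the declare lines above; the helper name and waitforlisten call assume the SPDK test common scripts are sourced.

  declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
  declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
  declare -A app_pid

  app=target
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
      ${app_params[$app]} -r "${app_socket[$app]}" --wait-for-rpc &
  app_pid[$app]=$!
  waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"   # wait for the RPC socket, pid 57264 below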
00:37:14.092 05:28:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:37:14.092 05:28:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:37:14.092 05:28:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:37:14.092 05:28:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:37:14.092 05:28:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:37:14.092 05:28:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57264 00:37:14.092 Waiting for target to run... 00:37:14.092 05:28:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:37:14.092 05:28:01 json_config -- json_config/common.sh@25 -- # waitforlisten 57264 /var/tmp/spdk_tgt.sock 00:37:14.092 05:28:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 57264 ']' 00:37:14.092 05:28:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:37:14.092 05:28:01 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:37:14.092 05:28:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:14.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:37:14.092 05:28:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:37:14.092 05:28:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:14.092 05:28:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:14.351 [2024-12-09 05:28:01.102289] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:14.351 [2024-12-09 05:28:01.102402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57264 ] 00:37:14.608 [2024-12-09 05:28:01.436756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.608 [2024-12-09 05:28:01.457883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:15.175 05:28:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:15.175 05:28:02 json_config -- common/autotest_common.sh@868 -- # return 0 00:37:15.175 00:37:15.175 05:28:02 json_config -- json_config/common.sh@26 -- # echo '' 00:37:15.175 05:28:02 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:37:15.175 05:28:02 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:37:15.175 05:28:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:15.175 05:28:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:15.175 05:28:02 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:37:15.175 05:28:02 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:37:15.175 05:28:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:15.175 05:28:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:15.175 05:28:02 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:37:15.175 05:28:02 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:37:15.175 05:28:02 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:37:15.742 [2024-12-09 05:28:02.439620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:15.742 05:28:02 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:37:15.742 05:28:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:37:15.742 05:28:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:15.742 05:28:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:15.742 05:28:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:37:15.742 05:28:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:37:15.742 05:28:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:37:15.742 05:28:02 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:37:15.742 05:28:02 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:37:15.742 05:28:02 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:37:15.742 05:28:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:37:15.742 05:28:02 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@51 -- # local get_types 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@54 -- # sort 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:37:16.001 05:28:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:16.001 05:28:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@62 -- # return 0 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:37:16.001 05:28:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:16.001 05:28:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:16.001 05:28:02 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:37:16.001 05:28:02 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:37:16.001 05:28:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:37:16.568 MallocForNvmf0 00:37:16.568 05:28:03 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:37:16.568 05:28:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:37:16.826 MallocForNvmf1 00:37:16.826 05:28:03 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:37:16.826 05:28:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:37:17.083 [2024-12-09 05:28:03.870949] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:17.083 05:28:03 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:17.083 05:28:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:17.342 05:28:04 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:37:17.342 05:28:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:37:17.600 05:28:04 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:37:17.600 05:28:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:37:17.858 05:28:04 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:37:17.858 05:28:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:37:18.118 [2024-12-09 05:28:05.007806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:18.118 05:28:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:37:18.118 05:28:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:18.118 05:28:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:18.118 05:28:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:37:18.118 05:28:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:18.118 05:28:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:18.384 05:28:05 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:37:18.384 05:28:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:37:18.384 05:28:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:37:18.644 MallocBdevForConfigChangeCheck 00:37:18.644 05:28:05 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:37:18.644 05:28:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:18.644 05:28:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:18.644 05:28:05 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:37:18.644 05:28:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:37:19.209 INFO: shutting down applications... 00:37:19.209 05:28:05 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:37:19.209 05:28:05 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:37:19.209 05:28:05 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:37:19.209 05:28:05 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:37:19.209 05:28:05 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:37:19.467 Calling clear_iscsi_subsystem 00:37:19.467 Calling clear_nvmf_subsystem 00:37:19.467 Calling clear_nbd_subsystem 00:37:19.467 Calling clear_ublk_subsystem 00:37:19.467 Calling clear_vhost_blk_subsystem 00:37:19.467 Calling clear_vhost_scsi_subsystem 00:37:19.467 Calling clear_bdev_subsystem 00:37:19.467 05:28:06 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:37:19.467 05:28:06 json_config -- json_config/json_config.sh@350 -- # count=100 00:37:19.467 05:28:06 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:37:19.467 05:28:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:37:19.467 05:28:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:37:19.467 05:28:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:37:20.032 05:28:06 json_config -- json_config/json_config.sh@352 -- # break 00:37:20.032 05:28:06 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:37:20.032 05:28:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:37:20.032 05:28:06 json_config -- json_config/common.sh@31 -- # local app=target 00:37:20.032 05:28:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:37:20.032 05:28:06 json_config -- json_config/common.sh@35 -- # [[ -n 57264 ]] 00:37:20.032 05:28:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57264 00:37:20.032 05:28:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:37:20.032 05:28:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:37:20.032 05:28:06 json_config -- json_config/common.sh@41 -- # kill -0 57264 00:37:20.032 05:28:06 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:37:20.291 05:28:07 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:37:20.291 05:28:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:37:20.291 05:28:07 json_config -- json_config/common.sh@41 -- # kill -0 57264 00:37:20.291 05:28:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:37:20.291 05:28:07 json_config -- json_config/common.sh@43 -- # break 00:37:20.291 05:28:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:37:20.291 SPDK target shutdown done 00:37:20.291 05:28:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:37:20.291 INFO: relaunching applications... 00:37:20.291 05:28:07 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:37:20.291 05:28:07 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:37:20.291 05:28:07 json_config -- json_config/common.sh@9 -- # local app=target 00:37:20.291 05:28:07 json_config -- json_config/common.sh@10 -- # shift 00:37:20.291 05:28:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:37:20.291 05:28:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:37:20.291 05:28:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:37:20.291 05:28:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:37:20.291 05:28:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:37:20.291 05:28:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57470 00:37:20.291 05:28:07 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:37:20.291 05:28:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:37:20.291 05:28:07 json_config -- json_config/common.sh@25 -- # waitforlisten 57470 /var/tmp/spdk_tgt.sock 00:37:20.291 Waiting for target to run... 00:37:20.291 05:28:07 json_config -- common/autotest_common.sh@835 -- # '[' -z 57470 ']' 00:37:20.291 05:28:07 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:37:20.291 05:28:07 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:20.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:37:20.291 05:28:07 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:37:20.291 05:28:07 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:20.291 05:28:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:20.549 [2024-12-09 05:28:07.278661] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:20.549 [2024-12-09 05:28:07.278782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57470 ] 00:37:20.869 [2024-12-09 05:28:07.608645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.869 [2024-12-09 05:28:07.636649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.869 [2024-12-09 05:28:07.769266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:21.128 [2024-12-09 05:28:07.972522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:21.128 [2024-12-09 05:28:08.004592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:21.386 05:28:08 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:21.386 05:28:08 json_config -- common/autotest_common.sh@868 -- # return 0 00:37:21.386 00:37:21.386 05:28:08 json_config -- json_config/common.sh@26 -- # echo '' 00:37:21.386 05:28:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:37:21.386 INFO: Checking if target configuration is the same... 00:37:21.386 05:28:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:37:21.386 05:28:08 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:37:21.386 05:28:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:37:21.386 05:28:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:37:21.386 + '[' 2 -ne 2 ']' 00:37:21.386 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:37:21.386 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:37:21.386 + rootdir=/home/vagrant/spdk_repo/spdk 00:37:21.386 +++ basename /dev/fd/62 00:37:21.386 ++ mktemp /tmp/62.XXX 00:37:21.386 + tmp_file_1=/tmp/62.NHo 00:37:21.386 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:37:21.386 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:37:21.386 + tmp_file_2=/tmp/spdk_tgt_config.json.amE 00:37:21.386 + ret=0 00:37:21.386 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:37:21.950 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:37:21.950 + diff -u /tmp/62.NHo /tmp/spdk_tgt_config.json.amE 00:37:21.950 INFO: JSON config files are the same 00:37:21.950 + echo 'INFO: JSON config files are the same' 00:37:21.950 + rm /tmp/62.NHo /tmp/spdk_tgt_config.json.amE 00:37:21.950 + exit 0 00:37:21.950 05:28:08 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:37:21.950 INFO: changing configuration and checking if this can be detected... 00:37:21.950 05:28:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
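Editor's note: the "configuration is the same" check above compares the relaunched target against spdk_tgt_config.json, and that file describes the nvmf setup built earlier by create_nvmf_subsystem_config. The equivalent rpc.py sequence, mirroring the tgt_rpc calls in the log, is sketched below; only the final redirect into spdk_tgt_config.json is an assumption about how the harness writes the file.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # marker for the change check
  $rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json  # assumed redirect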
00:37:21.950 05:28:08 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:37:21.950 05:28:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:37:22.207 05:28:09 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:37:22.207 05:28:09 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:37:22.207 05:28:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:37:22.207 + '[' 2 -ne 2 ']' 00:37:22.207 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:37:22.207 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:37:22.207 + rootdir=/home/vagrant/spdk_repo/spdk 00:37:22.207 +++ basename /dev/fd/62 00:37:22.207 ++ mktemp /tmp/62.XXX 00:37:22.207 + tmp_file_1=/tmp/62.jFT 00:37:22.207 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:37:22.207 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:37:22.207 + tmp_file_2=/tmp/spdk_tgt_config.json.XXD 00:37:22.207 + ret=0 00:37:22.207 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:37:22.772 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:37:22.772 + diff -u /tmp/62.jFT /tmp/spdk_tgt_config.json.XXD 00:37:22.772 + ret=1 00:37:22.772 + echo '=== Start of file: /tmp/62.jFT ===' 00:37:22.772 + cat /tmp/62.jFT 00:37:22.772 + echo '=== End of file: /tmp/62.jFT ===' 00:37:22.772 + echo '' 00:37:22.772 + echo '=== Start of file: /tmp/spdk_tgt_config.json.XXD ===' 00:37:22.772 + cat /tmp/spdk_tgt_config.json.XXD 00:37:22.772 + echo '=== End of file: /tmp/spdk_tgt_config.json.XXD ===' 00:37:22.772 + echo '' 00:37:22.772 + rm /tmp/62.jFT /tmp/spdk_tgt_config.json.XXD 00:37:22.772 + exit 1 00:37:22.772 INFO: configuration change detected. 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
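The configuration check traced above boils down to three steps: dump the live configuration from the running target over its RPC socket, sort both JSON documents into a canonical order, and diff them. Below is a minimal stand-alone sketch of that flow, assuming the same repository layout under /home/vagrant/spdk_repo/spdk and a target listening on /var/tmp/spdk_tgt.sock; the temp-file names are illustrative, while rpc.py save_config and config_filter.py -method sort are the calls visible in the trace.

    #!/usr/bin/env bash
    # Sketch: compare the running spdk_tgt configuration against a saved JSON file.
    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk_tgt.sock
    saved_json=$rootdir/spdk_tgt_config.json

    live_sorted=$(mktemp /tmp/live_config.XXX)
    saved_sorted=$(mktemp /tmp/saved_config.XXX)

    # Dump the current configuration from the running target over its RPC socket,
    # then sort it into canonical order so key/array ordering cannot cause a diff.
    "$rootdir/scripts/rpc.py" -s "$rpc_sock" save_config \
        | "$rootdir/test/json_config/config_filter.py" -method sort > "$live_sorted"

    # Canonicalize the previously saved configuration file the same way.
    "$rootdir/test/json_config/config_filter.py" -method sort < "$saved_json" > "$saved_sorted"

    # Exit status mirrors the test: 0 when the configurations match, 1 when a change is detected.
    if diff -u "$live_sorted" "$saved_sorted"; then
        echo 'INFO: JSON config files are the same'
        ret=0
    else
        echo 'INFO: configuration change detected.'
        ret=1
    fi
    rm -f "$live_sorted" "$saved_sorted"
    exit "$ret"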
00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 57470 ]] 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:22.772 05:28:09 json_config -- json_config/json_config.sh@330 -- # killprocess 57470 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@954 -- # '[' -z 57470 ']' 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@958 -- # kill -0 57470 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@959 -- # uname 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:22.772 05:28:09 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57470 00:37:23.031 05:28:09 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:23.031 05:28:09 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:23.031 killing process with pid 57470 00:37:23.031 05:28:09 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57470' 00:37:23.031 05:28:09 json_config -- common/autotest_common.sh@973 -- # kill 57470 00:37:23.031 05:28:09 json_config -- common/autotest_common.sh@978 -- # wait 57470 00:37:23.031 05:28:09 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:37:23.031 05:28:09 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:37:23.031 05:28:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:23.031 05:28:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:23.290 05:28:10 json_config -- json_config/json_config.sh@335 -- # return 0 00:37:23.290 INFO: Success 00:37:23.290 05:28:10 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:37:23.290 00:37:23.290 real 0m9.175s 00:37:23.290 user 0m13.665s 00:37:23.290 sys 0m1.559s 00:37:23.290 
05:28:10 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.290 05:28:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:23.290 ************************************ 00:37:23.290 END TEST json_config 00:37:23.290 ************************************ 00:37:23.290 05:28:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:37:23.290 05:28:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.290 05:28:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.290 05:28:10 -- common/autotest_common.sh@10 -- # set +x 00:37:23.290 ************************************ 00:37:23.290 START TEST json_config_extra_key 00:37:23.290 ************************************ 00:37:23.290 05:28:10 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:37:23.290 05:28:10 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:23.290 05:28:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:23.290 05:28:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:37:23.290 05:28:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:23.290 05:28:10 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:37:23.290 05:28:10 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:23.291 05:28:10 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.291 --rc genhtml_branch_coverage=1 00:37:23.291 --rc genhtml_function_coverage=1 00:37:23.291 --rc genhtml_legend=1 00:37:23.291 --rc geninfo_all_blocks=1 00:37:23.291 --rc geninfo_unexecuted_blocks=1 00:37:23.291 00:37:23.291 ' 00:37:23.291 05:28:10 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.291 --rc genhtml_branch_coverage=1 00:37:23.291 --rc genhtml_function_coverage=1 00:37:23.291 --rc genhtml_legend=1 00:37:23.291 --rc geninfo_all_blocks=1 00:37:23.291 --rc geninfo_unexecuted_blocks=1 00:37:23.291 00:37:23.291 ' 00:37:23.291 05:28:10 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.291 --rc genhtml_branch_coverage=1 00:37:23.291 --rc genhtml_function_coverage=1 00:37:23.291 --rc genhtml_legend=1 00:37:23.291 --rc geninfo_all_blocks=1 00:37:23.291 --rc geninfo_unexecuted_blocks=1 00:37:23.291 00:37:23.291 ' 00:37:23.291 05:28:10 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.291 --rc genhtml_branch_coverage=1 00:37:23.291 --rc genhtml_function_coverage=1 00:37:23.291 --rc genhtml_legend=1 00:37:23.291 --rc geninfo_all_blocks=1 00:37:23.291 --rc geninfo_unexecuted_blocks=1 00:37:23.291 00:37:23.291 ' 00:37:23.291 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:23.291 05:28:10 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:23.291 05:28:10 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:37:23.291 05:28:10 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:23.291 05:28:10 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:23.291 05:28:10 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:23.291 05:28:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.291 05:28:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.291 05:28:10 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.291 05:28:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:37:23.291 05:28:10 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:23.291 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:23.291 05:28:10 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:23.550 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:37:23.550 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:37:23.550 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:37:23.551 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:37:23.551 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:37:23.551 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:37:23.551 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:37:23.551 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:37:23.551 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:37:23.551 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:37:23.551 INFO: launching applications... 00:37:23.551 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:37:23.551 05:28:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57620 00:37:23.551 Waiting for target to run... 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57620 /var/tmp/spdk_tgt.sock 00:37:23.551 05:28:10 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57620 ']' 00:37:23.551 05:28:10 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:37:23.551 05:28:10 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:37:23.551 05:28:10 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:37:23.551 05:28:10 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:37:23.551 05:28:10 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.551 05:28:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:37:23.551 [2024-12-09 05:28:10.329114] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:23.551 [2024-12-09 05:28:10.329216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57620 ] 00:37:23.810 [2024-12-09 05:28:10.647490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.810 [2024-12-09 05:28:10.673811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.810 [2024-12-09 05:28:10.700471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:24.745 05:28:11 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.745 05:28:11 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:37:24.745 00:37:24.745 05:28:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:37:24.745 INFO: shutting down applications... 00:37:24.745 05:28:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
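The shutdown that follows is the same poll loop seen earlier for pid 57264: send SIGINT to the target, then wait up to 30 iterations of 0.5 s for the PID to disappear before declaring "SPDK target shutdown done". A minimal sketch of that loop is below; the shutdown_app function name is illustrative, and the final SIGKILL fallback is an assumption not shown in the trace, while the kill -SIGINT / kill -0 / sleep 0.5 pattern mirrors json_config/common.sh as logged above.

    # Sketch of the graceful-shutdown wait used by the json_config tests.
    # $1 is the PID of the spdk_tgt process started earlier (57620 in this run).
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        # Poll up to 30 times, 0.5 s apart, for the process to exit.
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        # Illustrative fallback if the target never exits (not shown in the trace above).
        echo "ERROR: pid $pid did not exit after SIGINT, sending SIGKILL" >&2
        kill -9 "$pid"
        return 1
    }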
00:37:24.745 05:28:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:37:24.745 05:28:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:37:24.745 05:28:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:37:24.745 05:28:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57620 ]] 00:37:24.745 05:28:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57620 00:37:24.745 05:28:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:37:24.745 05:28:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:37:24.745 05:28:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57620 00:37:24.746 05:28:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:37:25.004 05:28:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:37:25.004 05:28:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:37:25.004 05:28:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57620 00:37:25.004 05:28:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:37:25.004 05:28:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:37:25.004 05:28:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:37:25.004 SPDK target shutdown done 00:37:25.004 05:28:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:37:25.004 Success 00:37:25.004 05:28:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:37:25.004 00:37:25.004 real 0m1.858s 00:37:25.004 user 0m1.862s 00:37:25.004 sys 0m0.326s 00:37:25.004 05:28:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:25.004 05:28:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:37:25.004 ************************************ 00:37:25.004 END TEST json_config_extra_key 00:37:25.005 ************************************ 00:37:25.005 05:28:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:37:25.005 05:28:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:25.005 05:28:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:25.005 05:28:11 -- common/autotest_common.sh@10 -- # set +x 00:37:25.005 ************************************ 00:37:25.005 START TEST alias_rpc 00:37:25.005 ************************************ 00:37:25.005 05:28:11 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:37:25.263 * Looking for test storage... 
00:37:25.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:37:25.263 05:28:12 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:25.263 05:28:12 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:37:25.263 05:28:12 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:25.263 05:28:12 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:25.263 05:28:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:37:25.263 05:28:12 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:25.263 05:28:12 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:25.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.263 --rc genhtml_branch_coverage=1 00:37:25.263 --rc genhtml_function_coverage=1 00:37:25.263 --rc genhtml_legend=1 00:37:25.263 --rc geninfo_all_blocks=1 00:37:25.263 --rc geninfo_unexecuted_blocks=1 00:37:25.263 00:37:25.263 ' 00:37:25.263 05:28:12 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:25.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.263 --rc genhtml_branch_coverage=1 00:37:25.263 --rc genhtml_function_coverage=1 00:37:25.263 --rc genhtml_legend=1 00:37:25.263 --rc geninfo_all_blocks=1 00:37:25.263 --rc geninfo_unexecuted_blocks=1 00:37:25.263 00:37:25.263 ' 00:37:25.263 05:28:12 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:25.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.263 --rc genhtml_branch_coverage=1 00:37:25.263 --rc genhtml_function_coverage=1 00:37:25.263 --rc genhtml_legend=1 00:37:25.263 --rc geninfo_all_blocks=1 00:37:25.263 --rc geninfo_unexecuted_blocks=1 00:37:25.263 00:37:25.263 ' 00:37:25.263 05:28:12 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:25.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.263 --rc genhtml_branch_coverage=1 00:37:25.263 --rc genhtml_function_coverage=1 00:37:25.263 --rc genhtml_legend=1 00:37:25.263 --rc geninfo_all_blocks=1 00:37:25.263 --rc geninfo_unexecuted_blocks=1 00:37:25.263 00:37:25.263 ' 00:37:25.263 05:28:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:37:25.264 05:28:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57697 00:37:25.264 05:28:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:25.264 05:28:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57697 00:37:25.264 05:28:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57697 ']' 00:37:25.264 05:28:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:25.264 05:28:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:25.264 05:28:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:25.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:25.264 05:28:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:25.264 05:28:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:25.264 [2024-12-09 05:28:12.201969] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:25.264 [2024-12-09 05:28:12.202079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57697 ] 00:37:25.522 [2024-12-09 05:28:12.348919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.522 [2024-12-09 05:28:12.383303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.522 [2024-12-09 05:28:12.425447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:25.781 05:28:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:25.781 05:28:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:37:25.781 05:28:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:37:26.040 05:28:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57697 00:37:26.040 05:28:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57697 ']' 00:37:26.040 05:28:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57697 00:37:26.040 05:28:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:37:26.040 05:28:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:26.040 05:28:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57697 00:37:26.041 05:28:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:26.041 05:28:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:26.041 killing process with pid 57697 00:37:26.041 05:28:12 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57697' 00:37:26.041 05:28:12 alias_rpc -- common/autotest_common.sh@973 -- # kill 57697 00:37:26.041 05:28:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 57697 00:37:26.300 00:37:26.300 real 0m1.247s 00:37:26.300 user 0m1.414s 00:37:26.300 sys 0m0.323s 00:37:26.300 05:28:13 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:26.300 05:28:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:26.300 ************************************ 00:37:26.300 END TEST alias_rpc 00:37:26.300 ************************************ 00:37:26.300 05:28:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:37:26.300 05:28:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:37:26.300 05:28:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:26.300 05:28:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:26.300 05:28:13 -- common/autotest_common.sh@10 -- # set +x 00:37:26.300 ************************************ 00:37:26.300 START TEST spdkcli_tcp 00:37:26.300 ************************************ 00:37:26.300 05:28:13 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:37:26.559 * Looking for test storage... 
00:37:26.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:37:26.559 05:28:13 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:26.559 05:28:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:37:26.559 05:28:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:26.559 05:28:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:26.559 05:28:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:37:26.559 05:28:13 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:26.559 05:28:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:26.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.559 --rc genhtml_branch_coverage=1 00:37:26.559 --rc genhtml_function_coverage=1 00:37:26.559 --rc genhtml_legend=1 00:37:26.559 --rc geninfo_all_blocks=1 00:37:26.559 --rc geninfo_unexecuted_blocks=1 00:37:26.559 00:37:26.559 ' 00:37:26.559 05:28:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:26.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.559 --rc genhtml_branch_coverage=1 00:37:26.559 --rc genhtml_function_coverage=1 00:37:26.559 --rc genhtml_legend=1 00:37:26.559 --rc geninfo_all_blocks=1 00:37:26.559 --rc geninfo_unexecuted_blocks=1 00:37:26.559 
00:37:26.559 ' 00:37:26.559 05:28:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:26.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.559 --rc genhtml_branch_coverage=1 00:37:26.559 --rc genhtml_function_coverage=1 00:37:26.559 --rc genhtml_legend=1 00:37:26.559 --rc geninfo_all_blocks=1 00:37:26.560 --rc geninfo_unexecuted_blocks=1 00:37:26.560 00:37:26.560 ' 00:37:26.560 05:28:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:26.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.560 --rc genhtml_branch_coverage=1 00:37:26.560 --rc genhtml_function_coverage=1 00:37:26.560 --rc genhtml_legend=1 00:37:26.560 --rc geninfo_all_blocks=1 00:37:26.560 --rc geninfo_unexecuted_blocks=1 00:37:26.560 00:37:26.560 ' 00:37:26.560 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:37:26.560 05:28:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:37:26.560 05:28:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:37:26.560 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:37:26.560 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:37:26.560 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:26.560 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:37:26.560 05:28:13 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:26.560 05:28:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:26.560 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57774 00:37:26.560 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57774 00:37:26.560 05:28:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57774 ']' 00:37:26.560 05:28:13 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:26.560 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:37:26.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:26.560 05:28:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:26.560 05:28:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:26.560 05:28:13 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:26.560 05:28:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:26.560 [2024-12-09 05:28:13.512590] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:26.560 [2024-12-09 05:28:13.512711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57774 ] 00:37:26.819 [2024-12-09 05:28:13.662272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:26.819 [2024-12-09 05:28:13.697473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:26.819 [2024-12-09 05:28:13.697481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.819 [2024-12-09 05:28:13.740494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:27.077 05:28:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:27.077 05:28:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:37:27.077 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57778 00:37:27.077 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:37:27.077 05:28:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:37:27.336 [ 00:37:27.336 "bdev_malloc_delete", 00:37:27.336 "bdev_malloc_create", 00:37:27.336 "bdev_null_resize", 00:37:27.336 "bdev_null_delete", 00:37:27.336 "bdev_null_create", 00:37:27.336 "bdev_nvme_cuse_unregister", 00:37:27.336 "bdev_nvme_cuse_register", 00:37:27.336 "bdev_opal_new_user", 00:37:27.336 "bdev_opal_set_lock_state", 00:37:27.336 "bdev_opal_delete", 00:37:27.336 "bdev_opal_get_info", 00:37:27.336 "bdev_opal_create", 00:37:27.336 "bdev_nvme_opal_revert", 00:37:27.336 "bdev_nvme_opal_init", 00:37:27.336 "bdev_nvme_send_cmd", 00:37:27.336 "bdev_nvme_set_keys", 00:37:27.336 "bdev_nvme_get_path_iostat", 00:37:27.336 "bdev_nvme_get_mdns_discovery_info", 00:37:27.336 "bdev_nvme_stop_mdns_discovery", 00:37:27.336 "bdev_nvme_start_mdns_discovery", 00:37:27.336 "bdev_nvme_set_multipath_policy", 00:37:27.336 "bdev_nvme_set_preferred_path", 00:37:27.336 "bdev_nvme_get_io_paths", 00:37:27.336 "bdev_nvme_remove_error_injection", 00:37:27.336 "bdev_nvme_add_error_injection", 00:37:27.336 "bdev_nvme_get_discovery_info", 00:37:27.336 "bdev_nvme_stop_discovery", 00:37:27.336 "bdev_nvme_start_discovery", 00:37:27.336 "bdev_nvme_get_controller_health_info", 00:37:27.336 "bdev_nvme_disable_controller", 00:37:27.336 "bdev_nvme_enable_controller", 00:37:27.336 "bdev_nvme_reset_controller", 00:37:27.336 "bdev_nvme_get_transport_statistics", 00:37:27.336 "bdev_nvme_apply_firmware", 00:37:27.336 "bdev_nvme_detach_controller", 00:37:27.336 "bdev_nvme_get_controllers", 00:37:27.336 "bdev_nvme_attach_controller", 00:37:27.336 "bdev_nvme_set_hotplug", 00:37:27.336 "bdev_nvme_set_options", 00:37:27.336 "bdev_passthru_delete", 00:37:27.336 "bdev_passthru_create", 00:37:27.336 "bdev_lvol_set_parent_bdev", 00:37:27.336 "bdev_lvol_set_parent", 00:37:27.336 "bdev_lvol_check_shallow_copy", 00:37:27.336 "bdev_lvol_start_shallow_copy", 00:37:27.336 "bdev_lvol_grow_lvstore", 00:37:27.336 "bdev_lvol_get_lvols", 00:37:27.336 "bdev_lvol_get_lvstores", 00:37:27.336 "bdev_lvol_delete", 00:37:27.336 "bdev_lvol_set_read_only", 00:37:27.336 "bdev_lvol_resize", 00:37:27.336 "bdev_lvol_decouple_parent", 00:37:27.336 "bdev_lvol_inflate", 00:37:27.336 "bdev_lvol_rename", 00:37:27.336 "bdev_lvol_clone_bdev", 00:37:27.336 "bdev_lvol_clone", 00:37:27.336 "bdev_lvol_snapshot", 
00:37:27.336 "bdev_lvol_create", 00:37:27.336 "bdev_lvol_delete_lvstore", 00:37:27.336 "bdev_lvol_rename_lvstore", 00:37:27.336 "bdev_lvol_create_lvstore", 00:37:27.336 "bdev_raid_set_options", 00:37:27.336 "bdev_raid_remove_base_bdev", 00:37:27.336 "bdev_raid_add_base_bdev", 00:37:27.336 "bdev_raid_delete", 00:37:27.336 "bdev_raid_create", 00:37:27.336 "bdev_raid_get_bdevs", 00:37:27.336 "bdev_error_inject_error", 00:37:27.336 "bdev_error_delete", 00:37:27.336 "bdev_error_create", 00:37:27.336 "bdev_split_delete", 00:37:27.336 "bdev_split_create", 00:37:27.336 "bdev_delay_delete", 00:37:27.336 "bdev_delay_create", 00:37:27.337 "bdev_delay_update_latency", 00:37:27.337 "bdev_zone_block_delete", 00:37:27.337 "bdev_zone_block_create", 00:37:27.337 "blobfs_create", 00:37:27.337 "blobfs_detect", 00:37:27.337 "blobfs_set_cache_size", 00:37:27.337 "bdev_aio_delete", 00:37:27.337 "bdev_aio_rescan", 00:37:27.337 "bdev_aio_create", 00:37:27.337 "bdev_ftl_set_property", 00:37:27.337 "bdev_ftl_get_properties", 00:37:27.337 "bdev_ftl_get_stats", 00:37:27.337 "bdev_ftl_unmap", 00:37:27.337 "bdev_ftl_unload", 00:37:27.337 "bdev_ftl_delete", 00:37:27.337 "bdev_ftl_load", 00:37:27.337 "bdev_ftl_create", 00:37:27.337 "bdev_virtio_attach_controller", 00:37:27.337 "bdev_virtio_scsi_get_devices", 00:37:27.337 "bdev_virtio_detach_controller", 00:37:27.337 "bdev_virtio_blk_set_hotplug", 00:37:27.337 "bdev_iscsi_delete", 00:37:27.337 "bdev_iscsi_create", 00:37:27.337 "bdev_iscsi_set_options", 00:37:27.337 "bdev_uring_delete", 00:37:27.337 "bdev_uring_rescan", 00:37:27.337 "bdev_uring_create", 00:37:27.337 "accel_error_inject_error", 00:37:27.337 "ioat_scan_accel_module", 00:37:27.337 "dsa_scan_accel_module", 00:37:27.337 "iaa_scan_accel_module", 00:37:27.337 "keyring_file_remove_key", 00:37:27.337 "keyring_file_add_key", 00:37:27.337 "keyring_linux_set_options", 00:37:27.337 "fsdev_aio_delete", 00:37:27.337 "fsdev_aio_create", 00:37:27.337 "iscsi_get_histogram", 00:37:27.337 "iscsi_enable_histogram", 00:37:27.337 "iscsi_set_options", 00:37:27.337 "iscsi_get_auth_groups", 00:37:27.337 "iscsi_auth_group_remove_secret", 00:37:27.337 "iscsi_auth_group_add_secret", 00:37:27.337 "iscsi_delete_auth_group", 00:37:27.337 "iscsi_create_auth_group", 00:37:27.337 "iscsi_set_discovery_auth", 00:37:27.337 "iscsi_get_options", 00:37:27.337 "iscsi_target_node_request_logout", 00:37:27.337 "iscsi_target_node_set_redirect", 00:37:27.337 "iscsi_target_node_set_auth", 00:37:27.337 "iscsi_target_node_add_lun", 00:37:27.337 "iscsi_get_stats", 00:37:27.337 "iscsi_get_connections", 00:37:27.337 "iscsi_portal_group_set_auth", 00:37:27.337 "iscsi_start_portal_group", 00:37:27.337 "iscsi_delete_portal_group", 00:37:27.337 "iscsi_create_portal_group", 00:37:27.337 "iscsi_get_portal_groups", 00:37:27.337 "iscsi_delete_target_node", 00:37:27.337 "iscsi_target_node_remove_pg_ig_maps", 00:37:27.337 "iscsi_target_node_add_pg_ig_maps", 00:37:27.337 "iscsi_create_target_node", 00:37:27.337 "iscsi_get_target_nodes", 00:37:27.337 "iscsi_delete_initiator_group", 00:37:27.337 "iscsi_initiator_group_remove_initiators", 00:37:27.337 "iscsi_initiator_group_add_initiators", 00:37:27.337 "iscsi_create_initiator_group", 00:37:27.337 "iscsi_get_initiator_groups", 00:37:27.337 "nvmf_set_crdt", 00:37:27.337 "nvmf_set_config", 00:37:27.337 "nvmf_set_max_subsystems", 00:37:27.337 "nvmf_stop_mdns_prr", 00:37:27.337 "nvmf_publish_mdns_prr", 00:37:27.337 "nvmf_subsystem_get_listeners", 00:37:27.337 "nvmf_subsystem_get_qpairs", 00:37:27.337 
"nvmf_subsystem_get_controllers", 00:37:27.337 "nvmf_get_stats", 00:37:27.337 "nvmf_get_transports", 00:37:27.337 "nvmf_create_transport", 00:37:27.337 "nvmf_get_targets", 00:37:27.337 "nvmf_delete_target", 00:37:27.337 "nvmf_create_target", 00:37:27.337 "nvmf_subsystem_allow_any_host", 00:37:27.337 "nvmf_subsystem_set_keys", 00:37:27.337 "nvmf_subsystem_remove_host", 00:37:27.337 "nvmf_subsystem_add_host", 00:37:27.337 "nvmf_ns_remove_host", 00:37:27.337 "nvmf_ns_add_host", 00:37:27.337 "nvmf_subsystem_remove_ns", 00:37:27.337 "nvmf_subsystem_set_ns_ana_group", 00:37:27.337 "nvmf_subsystem_add_ns", 00:37:27.337 "nvmf_subsystem_listener_set_ana_state", 00:37:27.337 "nvmf_discovery_get_referrals", 00:37:27.337 "nvmf_discovery_remove_referral", 00:37:27.337 "nvmf_discovery_add_referral", 00:37:27.337 "nvmf_subsystem_remove_listener", 00:37:27.337 "nvmf_subsystem_add_listener", 00:37:27.337 "nvmf_delete_subsystem", 00:37:27.337 "nvmf_create_subsystem", 00:37:27.337 "nvmf_get_subsystems", 00:37:27.337 "env_dpdk_get_mem_stats", 00:37:27.337 "nbd_get_disks", 00:37:27.337 "nbd_stop_disk", 00:37:27.337 "nbd_start_disk", 00:37:27.337 "ublk_recover_disk", 00:37:27.337 "ublk_get_disks", 00:37:27.337 "ublk_stop_disk", 00:37:27.337 "ublk_start_disk", 00:37:27.337 "ublk_destroy_target", 00:37:27.337 "ublk_create_target", 00:37:27.337 "virtio_blk_create_transport", 00:37:27.337 "virtio_blk_get_transports", 00:37:27.337 "vhost_controller_set_coalescing", 00:37:27.337 "vhost_get_controllers", 00:37:27.337 "vhost_delete_controller", 00:37:27.337 "vhost_create_blk_controller", 00:37:27.337 "vhost_scsi_controller_remove_target", 00:37:27.337 "vhost_scsi_controller_add_target", 00:37:27.337 "vhost_start_scsi_controller", 00:37:27.337 "vhost_create_scsi_controller", 00:37:27.337 "thread_set_cpumask", 00:37:27.337 "scheduler_set_options", 00:37:27.337 "framework_get_governor", 00:37:27.337 "framework_get_scheduler", 00:37:27.337 "framework_set_scheduler", 00:37:27.337 "framework_get_reactors", 00:37:27.337 "thread_get_io_channels", 00:37:27.337 "thread_get_pollers", 00:37:27.337 "thread_get_stats", 00:37:27.337 "framework_monitor_context_switch", 00:37:27.337 "spdk_kill_instance", 00:37:27.337 "log_enable_timestamps", 00:37:27.337 "log_get_flags", 00:37:27.337 "log_clear_flag", 00:37:27.337 "log_set_flag", 00:37:27.337 "log_get_level", 00:37:27.337 "log_set_level", 00:37:27.337 "log_get_print_level", 00:37:27.337 "log_set_print_level", 00:37:27.337 "framework_enable_cpumask_locks", 00:37:27.337 "framework_disable_cpumask_locks", 00:37:27.337 "framework_wait_init", 00:37:27.337 "framework_start_init", 00:37:27.337 "scsi_get_devices", 00:37:27.337 "bdev_get_histogram", 00:37:27.337 "bdev_enable_histogram", 00:37:27.337 "bdev_set_qos_limit", 00:37:27.337 "bdev_set_qd_sampling_period", 00:37:27.337 "bdev_get_bdevs", 00:37:27.337 "bdev_reset_iostat", 00:37:27.337 "bdev_get_iostat", 00:37:27.337 "bdev_examine", 00:37:27.337 "bdev_wait_for_examine", 00:37:27.337 "bdev_set_options", 00:37:27.337 "accel_get_stats", 00:37:27.337 "accel_set_options", 00:37:27.337 "accel_set_driver", 00:37:27.337 "accel_crypto_key_destroy", 00:37:27.337 "accel_crypto_keys_get", 00:37:27.337 "accel_crypto_key_create", 00:37:27.337 "accel_assign_opc", 00:37:27.337 "accel_get_module_info", 00:37:27.337 "accel_get_opc_assignments", 00:37:27.337 "vmd_rescan", 00:37:27.337 "vmd_remove_device", 00:37:27.337 "vmd_enable", 00:37:27.337 "sock_get_default_impl", 00:37:27.337 "sock_set_default_impl", 00:37:27.337 "sock_impl_set_options", 00:37:27.337 
"sock_impl_get_options", 00:37:27.337 "iobuf_get_stats", 00:37:27.337 "iobuf_set_options", 00:37:27.337 "keyring_get_keys", 00:37:27.337 "framework_get_pci_devices", 00:37:27.337 "framework_get_config", 00:37:27.337 "framework_get_subsystems", 00:37:27.337 "fsdev_set_opts", 00:37:27.337 "fsdev_get_opts", 00:37:27.337 "trace_get_info", 00:37:27.337 "trace_get_tpoint_group_mask", 00:37:27.337 "trace_disable_tpoint_group", 00:37:27.337 "trace_enable_tpoint_group", 00:37:27.337 "trace_clear_tpoint_mask", 00:37:27.337 "trace_set_tpoint_mask", 00:37:27.337 "notify_get_notifications", 00:37:27.337 "notify_get_types", 00:37:27.337 "spdk_get_version", 00:37:27.337 "rpc_get_methods" 00:37:27.337 ] 00:37:27.337 05:28:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:27.337 05:28:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:37:27.337 05:28:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57774 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57774 ']' 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57774 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57774 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:27.337 killing process with pid 57774 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57774' 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57774 00:37:27.337 05:28:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57774 00:37:27.906 ************************************ 00:37:27.906 END TEST spdkcli_tcp 00:37:27.906 ************************************ 00:37:27.906 00:37:27.906 real 0m1.317s 00:37:27.906 user 0m2.344s 00:37:27.906 sys 0m0.354s 00:37:27.906 05:28:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.906 05:28:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:27.906 05:28:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:37:27.906 05:28:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:27.906 05:28:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:27.906 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:37:27.906 ************************************ 00:37:27.906 START TEST dpdk_mem_utility 00:37:27.906 ************************************ 00:37:27.906 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:37:27.906 * Looking for test storage... 
00:37:27.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:37:27.906 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:27.906 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:37:27.906 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:27.906 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:27.906 05:28:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:37:27.906 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:27.906 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.906 --rc genhtml_branch_coverage=1 00:37:27.906 --rc genhtml_function_coverage=1 00:37:27.906 --rc genhtml_legend=1 00:37:27.906 --rc geninfo_all_blocks=1 00:37:27.906 --rc geninfo_unexecuted_blocks=1 00:37:27.906 00:37:27.906 ' 00:37:27.906 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.906 --rc 
genhtml_branch_coverage=1 00:37:27.906 --rc genhtml_function_coverage=1 00:37:27.906 --rc genhtml_legend=1 00:37:27.906 --rc geninfo_all_blocks=1 00:37:27.906 --rc geninfo_unexecuted_blocks=1 00:37:27.906 00:37:27.906 ' 00:37:27.906 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.906 --rc genhtml_branch_coverage=1 00:37:27.906 --rc genhtml_function_coverage=1 00:37:27.906 --rc genhtml_legend=1 00:37:27.906 --rc geninfo_all_blocks=1 00:37:27.906 --rc geninfo_unexecuted_blocks=1 00:37:27.906 00:37:27.906 ' 00:37:27.906 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.906 --rc genhtml_branch_coverage=1 00:37:27.906 --rc genhtml_function_coverage=1 00:37:27.906 --rc genhtml_legend=1 00:37:27.906 --rc geninfo_all_blocks=1 00:37:27.906 --rc geninfo_unexecuted_blocks=1 00:37:27.906 00:37:27.906 ' 00:37:27.907 05:28:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:37:27.907 05:28:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57860 00:37:27.907 05:28:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57860 00:37:27.907 05:28:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:27.907 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57860 ']' 00:37:27.907 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:27.907 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:27.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:27.907 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:27.907 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:27.907 05:28:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:37:27.907 [2024-12-09 05:28:14.862541] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:27.907 [2024-12-09 05:28:14.862649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57860 ] 00:37:28.165 [2024-12-09 05:28:15.013011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.165 [2024-12-09 05:28:15.047305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.165 [2024-12-09 05:28:15.090903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:29.101 05:28:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:29.101 05:28:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:37:29.101 05:28:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:37:29.101 05:28:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:37:29.101 05:28:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.101 05:28:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:37:29.101 { 00:37:29.101 "filename": "/tmp/spdk_mem_dump.txt" 00:37:29.101 } 00:37:29.101 05:28:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.101 05:28:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:37:29.101 DPDK memory size 818.000000 MiB in 1 heap(s) 00:37:29.101 1 heaps totaling size 818.000000 MiB 00:37:29.101 size: 818.000000 MiB heap id: 0 00:37:29.101 end heaps---------- 00:37:29.101 9 mempools totaling size 603.782043 MiB 00:37:29.101 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:37:29.101 size: 158.602051 MiB name: PDU_data_out_Pool 00:37:29.101 size: 100.555481 MiB name: bdev_io_57860 00:37:29.101 size: 50.003479 MiB name: msgpool_57860 00:37:29.101 size: 36.509338 MiB name: fsdev_io_57860 00:37:29.101 size: 21.763794 MiB name: PDU_Pool 00:37:29.101 size: 19.513306 MiB name: SCSI_TASK_Pool 00:37:29.101 size: 4.133484 MiB name: evtpool_57860 00:37:29.101 size: 0.026123 MiB name: Session_Pool 00:37:29.101 end mempools------- 00:37:29.101 6 memzones totaling size 4.142822 MiB 00:37:29.101 size: 1.000366 MiB name: RG_ring_0_57860 00:37:29.101 size: 1.000366 MiB name: RG_ring_1_57860 00:37:29.101 size: 1.000366 MiB name: RG_ring_4_57860 00:37:29.101 size: 1.000366 MiB name: RG_ring_5_57860 00:37:29.101 size: 0.125366 MiB name: RG_ring_2_57860 00:37:29.101 size: 0.015991 MiB name: RG_ring_3_57860 00:37:29.101 end memzones------- 00:37:29.101 05:28:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:37:29.101 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:37:29.101 list of free elements. 
size: 10.802490 MiB 00:37:29.101 element at address: 0x200019200000 with size: 0.999878 MiB 00:37:29.101 element at address: 0x200019400000 with size: 0.999878 MiB 00:37:29.101 element at address: 0x200032000000 with size: 0.994446 MiB 00:37:29.101 element at address: 0x200000400000 with size: 0.993958 MiB 00:37:29.101 element at address: 0x200006400000 with size: 0.959839 MiB 00:37:29.101 element at address: 0x200012c00000 with size: 0.944275 MiB 00:37:29.101 element at address: 0x200019600000 with size: 0.936584 MiB 00:37:29.101 element at address: 0x200000200000 with size: 0.717346 MiB 00:37:29.101 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:37:29.101 element at address: 0x20000a600000 with size: 0.488892 MiB 00:37:29.101 element at address: 0x200000c00000 with size: 0.486267 MiB 00:37:29.101 element at address: 0x200019800000 with size: 0.485657 MiB 00:37:29.101 element at address: 0x200003e00000 with size: 0.480286 MiB 00:37:29.101 element at address: 0x200028200000 with size: 0.395752 MiB 00:37:29.101 element at address: 0x200000800000 with size: 0.351746 MiB 00:37:29.101 list of standard malloc elements. size: 199.268616 MiB 00:37:29.101 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:37:29.101 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:37:29.101 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:37:29.101 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:37:29.101 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:37:29.101 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:37:29.101 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:37:29.101 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:37:29.101 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:37:29.101 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:37:29.101 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:37:29.101 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000085e580 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087e840 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087e900 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087f080 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087f140 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087f200 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087f380 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087f440 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087f500 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:37:29.101 element at address: 0x20000087f680 with size: 0.000183 MiB 00:37:29.102 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:37:29.102 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:37:29.102 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000cff000 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200003efb980 with size: 0.000183 MiB 00:37:29.102 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:37:29.102 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:37:29.102 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:37:29.103 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:37:29.103 element at address: 0x200028265500 with size: 0.000183 MiB 00:37:29.103 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826c480 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826c540 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826c600 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826c780 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826c840 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826c900 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d080 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d140 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d200 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d380 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d440 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d500 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d680 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d740 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d800 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826d980 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826da40 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826db00 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826de00 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826df80 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e040 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e100 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e280 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e400 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e580 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e640 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e700 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e880 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826e940 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f000 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f180 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f240 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f300 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f480 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f540 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f600 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f780 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f840 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f900 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:37:29.103 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:37:29.103 list of memzone associated elements. 
size: 607.928894 MiB 00:37:29.103 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:37:29.103 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:37:29.103 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:37:29.103 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:37:29.103 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:37:29.103 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57860_0 00:37:29.103 element at address: 0x200000dff380 with size: 48.003052 MiB 00:37:29.103 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57860_0 00:37:29.103 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:37:29.103 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57860_0 00:37:29.103 element at address: 0x2000199be940 with size: 20.255554 MiB 00:37:29.103 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:37:29.103 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:37:29.103 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:37:29.103 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:37:29.103 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57860_0 00:37:29.103 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:37:29.103 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57860 00:37:29.103 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:37:29.103 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57860 00:37:29.103 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:37:29.103 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:37:29.103 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:37:29.103 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:37:29.103 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:37:29.103 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:37:29.103 element at address: 0x200003efba40 with size: 1.008118 MiB 00:37:29.103 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:37:29.103 element at address: 0x200000cff180 with size: 1.000488 MiB 00:37:29.103 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57860 00:37:29.103 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:37:29.103 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57860 00:37:29.103 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:37:29.103 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57860 00:37:29.103 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:37:29.103 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57860 00:37:29.103 element at address: 0x20000087f740 with size: 0.500488 MiB 00:37:29.103 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57860 00:37:29.103 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:37:29.103 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57860 00:37:29.103 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:37:29.103 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:37:29.103 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:37:29.103 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:37:29.103 element at address: 0x20001987c540 with size: 0.250488 MiB 00:37:29.103 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:37:29.103 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:37:29.103 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57860 00:37:29.103 element at address: 0x20000085e640 with size: 0.125488 MiB 00:37:29.103 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57860 00:37:29.104 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:37:29.104 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:37:29.104 element at address: 0x200028265680 with size: 0.023743 MiB 00:37:29.104 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:37:29.104 element at address: 0x20000085a380 with size: 0.016113 MiB 00:37:29.104 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57860 00:37:29.104 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:37:29.104 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:37:29.104 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:37:29.104 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57860 00:37:29.104 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:37:29.104 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57860 00:37:29.104 element at address: 0x20000085a180 with size: 0.000305 MiB 00:37:29.104 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57860 00:37:29.104 element at address: 0x20002826c280 with size: 0.000305 MiB 00:37:29.104 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:37:29.104 05:28:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:37:29.104 05:28:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57860 00:37:29.104 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57860 ']' 00:37:29.104 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57860 00:37:29.104 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:37:29.104 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:29.104 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57860 00:37:29.104 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:29.104 killing process with pid 57860 00:37:29.104 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:29.104 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57860' 00:37:29.104 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57860 00:37:29.104 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57860 00:37:29.670 00:37:29.670 real 0m1.735s 00:37:29.670 user 0m2.002s 00:37:29.670 sys 0m0.358s 00:37:29.670 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:29.670 ************************************ 00:37:29.670 END TEST dpdk_mem_utility 00:37:29.670 ************************************ 00:37:29.670 05:28:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:37:29.670 05:28:16 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:37:29.670 05:28:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:29.670 05:28:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:29.670 05:28:16 -- common/autotest_common.sh@10 -- # set +x 
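(Editor's note: the dpdk_mem_utility trace above reduces to a short sequence — start spdk_tgt, trigger the env_dpdk_get_mem_stats RPC so the target writes /tmp/spdk_mem_dump.txt, then run scripts/dpdk_mem_info.py once for the summary and once with -m 0 for the per-element heap dump, killing the target on exit. The sketch below is a minimal standalone approximation of that flow, not the actual test_dpdk_mem_info.sh; the sleep stand-in for waitforlisten and the rpc.py invocation are assumptions based on the standard SPDK repo layout.)

```bash
#!/usr/bin/env bash
# Minimal re-creation of the dpdk_mem_utility flow traced above (a sketch, not the real test).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
MEM_SCRIPT=$SPDK_DIR/scripts/dpdk_mem_info.py

$SPDK_DIR/build/bin/spdk_tgt &        # start the target; it listens on /var/tmp/spdk.sock
spdkpid=$!
trap 'kill $spdkpid' SIGINT SIGTERM EXIT
sleep 2                               # crude stand-in for the test's waitforlisten helper (assumption)

# Ask the running target to dump its DPDK memory state;
# per the log above, the dump lands in /tmp/spdk_mem_dump.txt.
$SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats

$MEM_SCRIPT                           # summary: heaps, mempools, memzones
$MEM_SCRIPT -m 0                      # detailed element list for heap id 0

kill $spdkpid
```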
00:37:29.670 ************************************ 00:37:29.670 START TEST event 00:37:29.670 ************************************ 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:37:29.670 * Looking for test storage... 00:37:29.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1693 -- # lcov --version 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:29.670 05:28:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.670 05:28:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.670 05:28:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.670 05:28:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.670 05:28:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.670 05:28:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.670 05:28:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.670 05:28:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.670 05:28:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.670 05:28:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.670 05:28:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.670 05:28:16 event -- scripts/common.sh@344 -- # case "$op" in 00:37:29.670 05:28:16 event -- scripts/common.sh@345 -- # : 1 00:37:29.670 05:28:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.670 05:28:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:29.670 05:28:16 event -- scripts/common.sh@365 -- # decimal 1 00:37:29.670 05:28:16 event -- scripts/common.sh@353 -- # local d=1 00:37:29.670 05:28:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.670 05:28:16 event -- scripts/common.sh@355 -- # echo 1 00:37:29.670 05:28:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.670 05:28:16 event -- scripts/common.sh@366 -- # decimal 2 00:37:29.670 05:28:16 event -- scripts/common.sh@353 -- # local d=2 00:37:29.670 05:28:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.670 05:28:16 event -- scripts/common.sh@355 -- # echo 2 00:37:29.670 05:28:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.670 05:28:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.670 05:28:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.670 05:28:16 event -- scripts/common.sh@368 -- # return 0 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:29.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.670 --rc genhtml_branch_coverage=1 00:37:29.670 --rc genhtml_function_coverage=1 00:37:29.670 --rc genhtml_legend=1 00:37:29.670 --rc geninfo_all_blocks=1 00:37:29.670 --rc geninfo_unexecuted_blocks=1 00:37:29.670 00:37:29.670 ' 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:29.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.670 --rc genhtml_branch_coverage=1 00:37:29.670 --rc genhtml_function_coverage=1 00:37:29.670 --rc genhtml_legend=1 00:37:29.670 --rc 
geninfo_all_blocks=1 00:37:29.670 --rc geninfo_unexecuted_blocks=1 00:37:29.670 00:37:29.670 ' 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:29.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.670 --rc genhtml_branch_coverage=1 00:37:29.670 --rc genhtml_function_coverage=1 00:37:29.670 --rc genhtml_legend=1 00:37:29.670 --rc geninfo_all_blocks=1 00:37:29.670 --rc geninfo_unexecuted_blocks=1 00:37:29.670 00:37:29.670 ' 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:29.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.670 --rc genhtml_branch_coverage=1 00:37:29.670 --rc genhtml_function_coverage=1 00:37:29.670 --rc genhtml_legend=1 00:37:29.670 --rc geninfo_all_blocks=1 00:37:29.670 --rc geninfo_unexecuted_blocks=1 00:37:29.670 00:37:29.670 ' 00:37:29.670 05:28:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:37:29.670 05:28:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:37:29.670 05:28:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:37:29.670 05:28:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:29.670 05:28:16 event -- common/autotest_common.sh@10 -- # set +x 00:37:29.670 ************************************ 00:37:29.670 START TEST event_perf 00:37:29.670 ************************************ 00:37:29.670 05:28:16 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:37:29.670 Running I/O for 1 seconds...[2024-12-09 05:28:16.631440] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:29.670 [2024-12-09 05:28:16.631549] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57945 ] 00:37:29.949 [2024-12-09 05:28:16.789719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:29.949 [2024-12-09 05:28:16.834047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.949 [2024-12-09 05:28:16.834204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:29.949 [2024-12-09 05:28:16.834298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:29.949 [2024-12-09 05:28:16.834307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.388 Running I/O for 1 seconds... 00:37:31.388 lcore 0: 181291 00:37:31.388 lcore 1: 181290 00:37:31.388 lcore 2: 181292 00:37:31.388 lcore 3: 181294 00:37:31.388 done. 
00:37:31.388 00:37:31.388 real 0m1.314s 00:37:31.388 user 0m4.146s 00:37:31.388 sys 0m0.042s 00:37:31.388 05:28:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:31.388 ************************************ 00:37:31.388 END TEST event_perf 00:37:31.388 05:28:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:37:31.388 ************************************ 00:37:31.388 05:28:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:37:31.388 05:28:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:31.388 05:28:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:31.388 05:28:17 event -- common/autotest_common.sh@10 -- # set +x 00:37:31.388 ************************************ 00:37:31.388 START TEST event_reactor 00:37:31.388 ************************************ 00:37:31.388 05:28:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:37:31.388 [2024-12-09 05:28:18.000095] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:31.388 [2024-12-09 05:28:18.000184] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57978 ] 00:37:31.388 [2024-12-09 05:28:18.157298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.388 [2024-12-09 05:28:18.188793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.324 test_start 00:37:32.324 oneshot 00:37:32.324 tick 100 00:37:32.324 tick 100 00:37:32.324 tick 250 00:37:32.324 tick 100 00:37:32.324 tick 100 00:37:32.324 tick 100 00:37:32.324 tick 250 00:37:32.324 tick 500 00:37:32.324 tick 100 00:37:32.324 tick 100 00:37:32.324 tick 250 00:37:32.324 tick 100 00:37:32.324 tick 100 00:37:32.324 test_end 00:37:32.324 00:37:32.324 real 0m1.303s 00:37:32.324 user 0m1.148s 00:37:32.324 sys 0m0.047s 00:37:32.324 05:28:19 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:32.324 05:28:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:37:32.324 ************************************ 00:37:32.324 END TEST event_reactor 00:37:32.324 ************************************ 00:37:32.583 05:28:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:37:32.583 05:28:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:32.583 05:28:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:32.583 05:28:19 event -- common/autotest_common.sh@10 -- # set +x 00:37:32.583 ************************************ 00:37:32.583 START TEST event_reactor_perf 00:37:32.583 ************************************ 00:37:32.583 05:28:19 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:37:32.583 [2024-12-09 05:28:19.359287] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:32.583 [2024-12-09 05:28:19.359379] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58013 ] 00:37:32.583 [2024-12-09 05:28:19.509551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.583 [2024-12-09 05:28:19.540761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.964 test_start 00:37:33.964 test_end 00:37:33.964 Performance: 396276 events per second 00:37:33.964 00:37:33.964 real 0m1.285s 00:37:33.964 user 0m1.142s 00:37:33.964 sys 0m0.037s 00:37:33.964 05:28:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:33.964 05:28:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:37:33.964 ************************************ 00:37:33.964 END TEST event_reactor_perf 00:37:33.964 ************************************ 00:37:33.964 05:28:20 event -- event/event.sh@49 -- # uname -s 00:37:33.964 05:28:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:37:33.964 05:28:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:37:33.964 05:28:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:33.964 05:28:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:33.964 05:28:20 event -- common/autotest_common.sh@10 -- # set +x 00:37:33.964 ************************************ 00:37:33.964 START TEST event_scheduler 00:37:33.964 ************************************ 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:37:33.964 * Looking for test storage... 
00:37:33.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:33.964 05:28:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:33.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.964 --rc genhtml_branch_coverage=1 00:37:33.964 --rc genhtml_function_coverage=1 00:37:33.964 --rc genhtml_legend=1 00:37:33.964 --rc geninfo_all_blocks=1 00:37:33.964 --rc geninfo_unexecuted_blocks=1 00:37:33.964 00:37:33.964 ' 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:33.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.964 --rc genhtml_branch_coverage=1 00:37:33.964 --rc genhtml_function_coverage=1 00:37:33.964 --rc genhtml_legend=1 00:37:33.964 --rc geninfo_all_blocks=1 00:37:33.964 --rc geninfo_unexecuted_blocks=1 00:37:33.964 00:37:33.964 ' 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:33.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.964 --rc genhtml_branch_coverage=1 00:37:33.964 --rc genhtml_function_coverage=1 00:37:33.964 --rc genhtml_legend=1 00:37:33.964 --rc geninfo_all_blocks=1 00:37:33.964 --rc geninfo_unexecuted_blocks=1 00:37:33.964 00:37:33.964 ' 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:33.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.964 --rc genhtml_branch_coverage=1 00:37:33.964 --rc genhtml_function_coverage=1 00:37:33.964 --rc genhtml_legend=1 00:37:33.964 --rc geninfo_all_blocks=1 00:37:33.964 --rc geninfo_unexecuted_blocks=1 00:37:33.964 00:37:33.964 ' 00:37:33.964 05:28:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:37:33.964 05:28:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58083 00:37:33.964 05:28:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:37:33.964 05:28:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58083 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58083 ']' 00:37:33.964 05:28:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:33.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:33.964 05:28:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:37:33.964 [2024-12-09 05:28:20.924083] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:33.965 [2024-12-09 05:28:20.924211] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58083 ] 00:37:34.224 [2024-12-09 05:28:21.079887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:34.224 [2024-12-09 05:28:21.124822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:34.224 [2024-12-09 05:28:21.124975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:34.224 [2024-12-09 05:28:21.125119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:34.224 [2024-12-09 05:28:21.125125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:34.224 05:28:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:34.224 05:28:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:37:34.224 05:28:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:37:34.224 05:28:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.224 05:28:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:37:34.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:37:34.224 POWER: Cannot set governor of lcore 0 to userspace 00:37:34.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:37:34.224 POWER: Cannot set governor of lcore 0 to performance 00:37:34.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:37:34.224 POWER: Cannot set governor of lcore 0 to userspace 00:37:34.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:37:34.224 POWER: Cannot set governor of lcore 0 to userspace 00:37:34.224 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:37:34.224 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:37:34.224 POWER: Unable to set Power Management Environment for lcore 0 00:37:34.224 [2024-12-09 05:28:21.182110] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:37:34.224 [2024-12-09 05:28:21.182125] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:37:34.224 [2024-12-09 05:28:21.182135] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:37:34.224 [2024-12-09 
05:28:21.182150] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:37:34.224 [2024-12-09 05:28:21.182159] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:37:34.224 [2024-12-09 05:28:21.182167] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:37:34.224 05:28:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.224 05:28:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:37:34.224 05:28:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.224 05:28:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:37:34.484 [2024-12-09 05:28:21.219986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:34.484 [2024-12-09 05:28:21.239146] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:37:34.484 05:28:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.484 05:28:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:37:34.484 05:28:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:34.484 05:28:21 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:34.484 05:28:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:37:34.484 ************************************ 00:37:34.484 START TEST scheduler_create_thread 00:37:34.484 ************************************ 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.484 2 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.484 3 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.484 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.485 4 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.485 5 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.485 6 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.485 7 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.485 8 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.485 9 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.485 10 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.485 05:28:21 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.485 05:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:35.864 05:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.865 05:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:37:35.865 05:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:37:35.865 05:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.865 05:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:37.245 05:28:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.245 00:37:37.245 real 0m2.615s 00:37:37.245 user 0m0.014s 00:37:37.245 sys 0m0.009s 00:37:37.245 05:28:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:37.245 05:28:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:37:37.245 ************************************ 00:37:37.245 END TEST scheduler_create_thread 00:37:37.246 ************************************ 00:37:37.246 05:28:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:37:37.246 05:28:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58083 00:37:37.246 05:28:23 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58083 ']' 00:37:37.246 05:28:23 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58083 00:37:37.246 05:28:23 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:37:37.246 05:28:23 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:37.246 05:28:23 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58083 00:37:37.246 05:28:23 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:37:37.246 05:28:23 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:37:37.246 killing process with pid 58083 00:37:37.246 05:28:23 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58083' 00:37:37.246 05:28:23 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58083 00:37:37.246 05:28:23 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58083 00:37:37.505 [2024-12-09 05:28:24.346831] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:37:37.765 00:37:37.765 real 0m3.858s 00:37:37.765 user 0m5.674s 00:37:37.765 sys 0m0.327s 00:37:37.765 05:28:24 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:37.765 ************************************ 00:37:37.765 END TEST event_scheduler 00:37:37.765 ************************************ 00:37:37.765 05:28:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:37:37.765 05:28:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:37:37.765 05:28:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:37:37.765 05:28:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:37.765 05:28:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:37.765 05:28:24 event -- common/autotest_common.sh@10 -- # set +x 00:37:37.765 ************************************ 00:37:37.765 START TEST app_repeat 00:37:37.765 ************************************ 00:37:37.765 05:28:24 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58174 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:37:37.765 Process app_repeat pid: 58174 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58174' 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:37:37.765 spdk_app_start Round 0 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:37:37.765 05:28:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58174 /var/tmp/spdk-nbd.sock 00:37:37.765 05:28:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58174 ']' 00:37:37.765 05:28:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:37:37.765 05:28:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:37.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:37:37.765 05:28:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
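Note: the event_scheduler run that just ended (pid 58083) is driven entirely over the RPC socket. The lines below are a rough manual reproduction of the call sequence seen in the trace, assuming a scheduler test app is already listening on /var/tmp/spdk.sock and that the scheduler_plugin module is on rpc.py's plugin path; this is an illustration of the traced calls, not the harness itself.

    # switch the framework to the dynamic scheduler, then finish subsystem init
    scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    # create pinned test threads through the scheduler test plugin:
    # -n name, -m core mask, -a percent of time the thread reports itself busy
    scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin \
        scheduler_thread_create -n active_pinned -m 0x1 -a 100
    scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin \
        scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # threads can later be re-weighted or removed by the id the create call returned
    scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_delete 12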
00:37:37.765 05:28:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:37.765 05:28:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:37:37.765 [2024-12-09 05:28:24.627000] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:37.765 [2024-12-09 05:28:24.627101] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58174 ] 00:37:38.025 [2024-12-09 05:28:24.774050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:38.025 [2024-12-09 05:28:24.816829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.025 [2024-12-09 05:28:24.816845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.025 [2024-12-09 05:28:24.855802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:38.025 05:28:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:38.025 05:28:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:37:38.025 05:28:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:37:38.282 Malloc0 00:37:38.282 05:28:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:37:38.565 Malloc1 00:37:38.565 05:28:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:38.565 05:28:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:37:38.842 /dev/nbd0 00:37:38.842 05:28:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:38.842 05:28:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:38.842 05:28:25 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:37:38.842 05:28:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:37:38.842 05:28:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:38.842 05:28:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:38.842 05:28:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:38.842 05:28:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:37:38.842 05:28:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:38.842 05:28:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:38.842 05:28:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:37:39.099 1+0 records in 00:37:39.099 1+0 records out 00:37:39.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300663 s, 13.6 MB/s 00:37:39.099 05:28:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:39.099 05:28:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:37:39.099 05:28:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:39.099 05:28:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:39.099 05:28:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:37:39.099 05:28:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:39.099 05:28:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:39.099 05:28:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:37:39.357 /dev/nbd1 00:37:39.357 05:28:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:39.357 05:28:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:37:39.357 1+0 records in 00:37:39.357 1+0 records out 00:37:39.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310949 s, 13.2 MB/s 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:39.357 05:28:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:39.357 05:28:26 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:37:39.357 05:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:39.357 05:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:39.357 05:28:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:39.357 05:28:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:39.357 05:28:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:37:39.614 { 00:37:39.614 "nbd_device": "/dev/nbd0", 00:37:39.614 "bdev_name": "Malloc0" 00:37:39.614 }, 00:37:39.614 { 00:37:39.614 "nbd_device": "/dev/nbd1", 00:37:39.614 "bdev_name": "Malloc1" 00:37:39.614 } 00:37:39.614 ]' 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:37:39.614 { 00:37:39.614 "nbd_device": "/dev/nbd0", 00:37:39.614 "bdev_name": "Malloc0" 00:37:39.614 }, 00:37:39.614 { 00:37:39.614 "nbd_device": "/dev/nbd1", 00:37:39.614 "bdev_name": "Malloc1" 00:37:39.614 } 00:37:39.614 ]' 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:37:39.614 /dev/nbd1' 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:37:39.614 /dev/nbd1' 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:39.614 05:28:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:37:39.615 256+0 records in 00:37:39.615 256+0 records out 00:37:39.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00704873 s, 149 MB/s 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:37:39.615 256+0 records in 00:37:39.615 256+0 records out 00:37:39.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230914 s, 45.4 MB/s 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:37:39.615 256+0 records in 00:37:39.615 
256+0 records out 00:37:39.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258815 s, 40.5 MB/s 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:39.615 05:28:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:40.180 05:28:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:40.180 05:28:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:40.180 05:28:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:40.180 05:28:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:40.180 05:28:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:40.180 05:28:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:40.180 05:28:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:37:40.180 05:28:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:37:40.180 05:28:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:40.180 05:28:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:40.180 05:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:40.437 05:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:40.438 05:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:40.438 05:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:40.694 05:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:40.694 05:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:37:40.694 05:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:40.694 05:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:37:40.694 05:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:37:40.694 05:28:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:37:40.694 05:28:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:37:40.694 05:28:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:37:40.694 05:28:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:37:40.694 05:28:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:37:40.952 05:28:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:37:40.952 [2024-12-09 05:28:27.908629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:41.222 [2024-12-09 05:28:27.939806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.222 [2024-12-09 05:28:27.939816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:41.222 [2024-12-09 05:28:27.969749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:41.222 [2024-12-09 05:28:27.969837] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:37:41.222 [2024-12-09 05:28:27.969849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:37:44.504 spdk_app_start Round 1 00:37:44.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:37:44.504 05:28:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:37:44.504 05:28:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:37:44.504 05:28:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58174 /var/tmp/spdk-nbd.sock 00:37:44.504 05:28:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58174 ']' 00:37:44.504 05:28:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:37:44.504 05:28:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:44.504 05:28:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
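Note: each app_repeat round above runs the same nbd round-trip that the xtrace shows. Stripped of the harness wrappers, the per-round flow is roughly the following sketch; the Malloc0/Malloc1 names, the /dev/nbd0 and /dev/nbd1 devices, the socket path and the 1M compare size are taken from the trace, while the nbdrandtest path is shortened for readability.

    RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096           # creates Malloc0
    $RPC bdev_malloc_create 64 4096           # creates Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0     # export the bdevs as nbd block devices
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256          # reference data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$d" bs=4096 count=256 oflag=direct  # write it out
        cmp -b -n 1M nbdrandtest "$d"                             # read back and verify
    done
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1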
00:37:44.504 05:28:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:44.504 05:28:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:37:44.504 05:28:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:44.504 05:28:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:37:44.504 05:28:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:37:44.504 Malloc0 00:37:44.504 05:28:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:37:44.763 Malloc1 00:37:44.763 05:28:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:44.763 05:28:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:37:45.022 /dev/nbd0 00:37:45.022 05:28:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:45.022 05:28:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:37:45.022 1+0 records in 00:37:45.022 1+0 records out 
00:37:45.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252291 s, 16.2 MB/s 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:45.022 05:28:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:37:45.022 05:28:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:45.022 05:28:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:45.022 05:28:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:37:45.281 /dev/nbd1 00:37:45.281 05:28:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:45.281 05:28:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:37:45.281 1+0 records in 00:37:45.281 1+0 records out 00:37:45.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313834 s, 13.1 MB/s 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:45.281 05:28:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:37:45.281 05:28:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:45.281 05:28:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:45.281 05:28:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:45.281 05:28:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:45.281 05:28:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:37:45.850 { 00:37:45.850 "nbd_device": "/dev/nbd0", 00:37:45.850 "bdev_name": "Malloc0" 00:37:45.850 }, 00:37:45.850 { 00:37:45.850 "nbd_device": "/dev/nbd1", 00:37:45.850 "bdev_name": "Malloc1" 00:37:45.850 } 
00:37:45.850 ]' 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:37:45.850 { 00:37:45.850 "nbd_device": "/dev/nbd0", 00:37:45.850 "bdev_name": "Malloc0" 00:37:45.850 }, 00:37:45.850 { 00:37:45.850 "nbd_device": "/dev/nbd1", 00:37:45.850 "bdev_name": "Malloc1" 00:37:45.850 } 00:37:45.850 ]' 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:37:45.850 /dev/nbd1' 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:37:45.850 /dev/nbd1' 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:37:45.850 256+0 records in 00:37:45.850 256+0 records out 00:37:45.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105171 s, 99.7 MB/s 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:37:45.850 256+0 records in 00:37:45.850 256+0 records out 00:37:45.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223107 s, 47.0 MB/s 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:37:45.850 256+0 records in 00:37:45.850 256+0 records out 00:37:45.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254327 s, 41.2 MB/s 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:37:45.850 05:28:32 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:45.850 05:28:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:46.109 05:28:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:46.109 05:28:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:46.109 05:28:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:46.109 05:28:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:46.109 05:28:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:46.109 05:28:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:46.109 05:28:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:37:46.109 05:28:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:37:46.109 05:28:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:46.109 05:28:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:46.368 05:28:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:46.627 05:28:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:37:46.886 05:28:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:37:46.886 05:28:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:37:47.144 05:28:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:37:47.144 [2024-12-09 05:28:34.047695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:47.144 [2024-12-09 05:28:34.076784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:47.144 [2024-12-09 05:28:34.076796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.144 [2024-12-09 05:28:34.108536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:47.144 [2024-12-09 05:28:34.108665] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:37:47.144 [2024-12-09 05:28:34.108678] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:37:50.426 spdk_app_start Round 2 00:37:50.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:37:50.426 05:28:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:37:50.426 05:28:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:37:50.426 05:28:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58174 /var/tmp/spdk-nbd.sock 00:37:50.426 05:28:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58174 ']' 00:37:50.426 05:28:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:37:50.426 05:28:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:50.426 05:28:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
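Note: between rounds the harness confirms the exports are really gone by counting entries in the nbd_get_disks JSON, exactly as the jq and grep lines above do. The same check written as a standalone snippet (socket path as in the trace; the || true mirrors the harness tolerating grep's non-zero exit on zero matches):

    disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # nbd_get_disks returns [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...] or []
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] && echo "all nbd devices detached"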
00:37:50.426 05:28:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:50.426 05:28:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:37:50.426 05:28:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:50.426 05:28:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:37:50.426 05:28:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:37:50.684 Malloc0 00:37:50.684 05:28:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:37:50.941 Malloc1 00:37:50.941 05:28:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:50.941 05:28:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:37:51.199 /dev/nbd0 00:37:51.199 05:28:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:51.199 05:28:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:51.199 05:28:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:51.199 05:28:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:37:51.199 05:28:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:51.199 05:28:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:51.199 05:28:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:51.199 05:28:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:37:51.199 05:28:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:51.199 05:28:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:51.199 05:28:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:37:51.199 1+0 records in 00:37:51.199 1+0 records out 
00:37:51.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289222 s, 14.2 MB/s 00:37:51.199 05:28:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:51.200 05:28:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:37:51.200 05:28:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:51.200 05:28:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:51.200 05:28:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:37:51.200 05:28:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:51.200 05:28:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:51.200 05:28:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:37:51.767 /dev/nbd1 00:37:51.767 05:28:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:51.767 05:28:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:37:51.767 1+0 records in 00:37:51.767 1+0 records out 00:37:51.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332349 s, 12.3 MB/s 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:51.767 05:28:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:37:51.767 05:28:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:51.767 05:28:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:51.767 05:28:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:51.767 05:28:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:51.767 05:28:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:37:52.026 { 00:37:52.026 "nbd_device": "/dev/nbd0", 00:37:52.026 "bdev_name": "Malloc0" 00:37:52.026 }, 00:37:52.026 { 00:37:52.026 "nbd_device": "/dev/nbd1", 00:37:52.026 "bdev_name": "Malloc1" 00:37:52.026 } 
00:37:52.026 ]' 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:37:52.026 { 00:37:52.026 "nbd_device": "/dev/nbd0", 00:37:52.026 "bdev_name": "Malloc0" 00:37:52.026 }, 00:37:52.026 { 00:37:52.026 "nbd_device": "/dev/nbd1", 00:37:52.026 "bdev_name": "Malloc1" 00:37:52.026 } 00:37:52.026 ]' 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:37:52.026 /dev/nbd1' 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:37:52.026 /dev/nbd1' 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:37:52.026 05:28:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:37:52.027 256+0 records in 00:37:52.027 256+0 records out 00:37:52.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108811 s, 96.4 MB/s 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:37:52.027 256+0 records in 00:37:52.027 256+0 records out 00:37:52.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261186 s, 40.1 MB/s 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:37:52.027 256+0 records in 00:37:52.027 256+0 records out 00:37:52.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267885 s, 39.1 MB/s 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:37:52.027 05:28:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:37:52.027 05:28:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:52.286 05:28:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:37:52.286 05:28:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:37:52.286 05:28:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:37:52.286 05:28:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:52.286 05:28:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:52.286 05:28:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:52.286 05:28:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:37:52.286 05:28:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:52.286 05:28:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:52.545 05:28:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:52.545 05:28:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:52.545 05:28:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:52.545 05:28:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:52.545 05:28:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:52.545 05:28:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:52.545 05:28:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:37:52.545 05:28:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:37:52.545 05:28:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:52.545 05:28:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:52.804 05:28:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:53.063 05:28:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:53.063 05:28:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:53.063 05:28:39 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:37:53.063 05:28:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:53.063 05:28:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:37:53.063 05:28:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:53.063 05:28:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:37:53.063 05:28:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:37:53.063 05:28:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:37:53.063 05:28:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:37:53.063 05:28:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:37:53.063 05:28:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:37:53.063 05:28:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:37:53.322 05:28:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:37:53.582 [2024-12-09 05:28:40.387528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:53.582 [2024-12-09 05:28:40.420358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:53.582 [2024-12-09 05:28:40.420370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.582 [2024-12-09 05:28:40.453022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:53.582 [2024-12-09 05:28:40.453125] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:37:53.582 [2024-12-09 05:28:40.453137] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:37:56.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:37:56.869 05:28:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58174 /var/tmp/spdk-nbd.sock 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58174 ']' 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
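For reference, the nbd_dd_data_verify write/verify flow traced above can be reproduced outside the harness with a few lines of plain bash; this is only a hedged sketch of the pattern visible in the trace, not the canonical nbd_common.sh helper, and the device names and temp-file handling are assumptions taken from this run.

  # Sketch of the write-then-verify pattern seen in the trace above.
  nbd_list=(/dev/nbd0 /dev/nbd1)                 # assumption: same two devices as in this run
  tmp_file=$(mktemp)
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write the pattern to each device
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"            # byte-compare the first 1 MiB; non-zero exit on mismatch
  done
  rm "$tmp_file"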
00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:37:56.869 05:28:43 event.app_repeat -- event/event.sh@39 -- # killprocess 58174 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58174 ']' 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58174 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58174 00:37:56.869 killing process with pid 58174 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58174' 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58174 00:37:56.869 05:28:43 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58174 00:37:56.869 spdk_app_start is called in Round 0. 00:37:56.869 Shutdown signal received, stop current app iteration 00:37:56.869 Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 reinitialization... 00:37:56.869 spdk_app_start is called in Round 1. 00:37:56.869 Shutdown signal received, stop current app iteration 00:37:56.869 Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 reinitialization... 00:37:56.869 spdk_app_start is called in Round 2. 00:37:56.869 Shutdown signal received, stop current app iteration 00:37:56.869 Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 reinitialization... 00:37:56.869 spdk_app_start is called in Round 3. 00:37:56.869 Shutdown signal received, stop current app iteration 00:37:56.869 05:28:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:37:56.869 05:28:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:37:56.869 00:37:56.869 real 0m19.142s 00:37:56.870 user 0m44.196s 00:37:56.870 sys 0m2.666s 00:37:56.870 05:28:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.870 05:28:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:37:56.870 ************************************ 00:37:56.870 END TEST app_repeat 00:37:56.870 ************************************ 00:37:56.870 05:28:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:37:56.870 05:28:43 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:37:56.870 05:28:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:56.870 05:28:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.870 05:28:43 event -- common/autotest_common.sh@10 -- # set +x 00:37:56.870 ************************************ 00:37:56.870 START TEST cpu_locks 00:37:56.870 ************************************ 00:37:56.870 05:28:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:37:57.129 * Looking for test storage... 
00:37:57.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:57.129 05:28:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:57.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.129 --rc genhtml_branch_coverage=1 00:37:57.129 --rc genhtml_function_coverage=1 00:37:57.129 --rc genhtml_legend=1 00:37:57.129 --rc geninfo_all_blocks=1 00:37:57.129 --rc geninfo_unexecuted_blocks=1 00:37:57.129 00:37:57.129 ' 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:57.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.129 --rc genhtml_branch_coverage=1 00:37:57.129 --rc genhtml_function_coverage=1 
00:37:57.129 --rc genhtml_legend=1 00:37:57.129 --rc geninfo_all_blocks=1 00:37:57.129 --rc geninfo_unexecuted_blocks=1 00:37:57.129 00:37:57.129 ' 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:57.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.129 --rc genhtml_branch_coverage=1 00:37:57.129 --rc genhtml_function_coverage=1 00:37:57.129 --rc genhtml_legend=1 00:37:57.129 --rc geninfo_all_blocks=1 00:37:57.129 --rc geninfo_unexecuted_blocks=1 00:37:57.129 00:37:57.129 ' 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:57.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.129 --rc genhtml_branch_coverage=1 00:37:57.129 --rc genhtml_function_coverage=1 00:37:57.129 --rc genhtml_legend=1 00:37:57.129 --rc geninfo_all_blocks=1 00:37:57.129 --rc geninfo_unexecuted_blocks=1 00:37:57.129 00:37:57.129 ' 00:37:57.129 05:28:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:37:57.129 05:28:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:37:57.129 05:28:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:37:57.129 05:28:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:57.129 05:28:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:37:57.129 ************************************ 00:37:57.129 START TEST default_locks 00:37:57.130 ************************************ 00:37:57.130 05:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:37:57.130 05:28:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58608 00:37:57.130 05:28:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58608 00:37:57.130 05:28:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:37:57.130 05:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58608 ']' 00:37:57.130 05:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:57.130 05:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:57.130 05:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:57.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:57.130 05:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:57.130 05:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:37:57.130 [2024-12-09 05:28:44.023368] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
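The lcov detection traced above hinges on the cmp_versions helper from scripts/common.sh, which splits dotted versions on ".-:" and compares them field by field; the following is a simplified, hedged sketch of that idea (a plain "less-than" test), not the actual helper.

  # Simplified sketch of dotted-version comparison as walked through in the trace.
  version_lt() {
      local IFS=.-:
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  version_lt 1.15 2 && echo "lcov 1.15 is older than 2"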
00:37:57.130 [2024-12-09 05:28:44.023726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58608 ] 00:37:57.389 [2024-12-09 05:28:44.170600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.389 [2024-12-09 05:28:44.207692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.390 [2024-12-09 05:28:44.252997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:57.649 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.649 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:37:57.649 05:28:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58608 00:37:57.649 05:28:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58608 00:37:57.649 05:28:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58608 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58608 ']' 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58608 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58608 00:37:57.908 killing process with pid 58608 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58608' 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58608 00:37:57.908 05:28:44 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58608 00:37:58.168 05:28:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58608 00:37:58.168 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:37:58.168 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58608 00:37:58.168 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58608 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58608 ']' 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.427 
05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.427 ERROR: process (pid: 58608) is no longer running 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:37:58.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58608) - No such process 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:37:58.427 00:37:58.427 real 0m1.175s 00:37:58.427 user 0m1.199s 00:37:58.427 sys 0m0.444s 00:37:58.427 ************************************ 00:37:58.427 END TEST default_locks 00:37:58.427 ************************************ 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.427 05:28:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:37:58.427 05:28:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:37:58.427 05:28:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:58.427 05:28:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:58.427 05:28:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:37:58.427 ************************************ 00:37:58.427 START TEST default_locks_via_rpc 00:37:58.427 ************************************ 00:37:58.427 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:37:58.427 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58653 00:37:58.427 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58653 00:37:58.427 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:37:58.427 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58653 ']' 00:37:58.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
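The default_locks run above confirms the target owns its core lock by piping lslocks through grep; a minimal standalone version of that check, with the pid and lock-file prefix taken straight from the trace, could look like this.

  # Sketch of the locks_exist check used throughout this trace: the target is
  # expected to hold a POSIX lock on a /var/tmp/spdk_cpu_lock_* file.
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 58608 && echo "pid 58608 holds an SPDK CPU core lock"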
00:37:58.427 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.427 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.427 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.427 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.427 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:58.427 [2024-12-09 05:28:45.267218] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:58.427 [2024-12-09 05:28:45.267339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58653 ] 00:37:58.685 [2024-12-09 05:28:45.418557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.685 [2024-12-09 05:28:45.453083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.685 [2024-12-09 05:28:45.492563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58653 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58653 00:37:58.685 05:28:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:37:59.251 05:28:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58653 00:37:59.251 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58653 ']' 00:37:59.251 05:28:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58653 00:37:59.251 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:37:59.251 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:59.251 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58653 00:37:59.251 killing process with pid 58653 00:37:59.251 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:59.251 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:59.251 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58653' 00:37:59.251 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58653 00:37:59.251 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58653 00:37:59.509 00:37:59.509 real 0m1.232s 00:37:59.509 user 0m1.329s 00:37:59.509 sys 0m0.425s 00:37:59.509 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:59.509 05:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:59.509 ************************************ 00:37:59.509 END TEST default_locks_via_rpc 00:37:59.509 ************************************ 00:37:59.509 05:28:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:37:59.509 05:28:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:59.509 05:28:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:59.509 05:28:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:37:59.766 ************************************ 00:37:59.766 START TEST non_locking_app_on_locked_coremask 00:37:59.766 ************************************ 00:37:59.766 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:37:59.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:59.766 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58696 00:37:59.766 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:37:59.766 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58696 /var/tmp/spdk.sock 00:37:59.767 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58696 ']' 00:37:59.767 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.767 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:59.767 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:59.767 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:59.767 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:37:59.767 [2024-12-09 05:28:46.551003] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:59.767 [2024-12-09 05:28:46.551400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58696 ] 00:37:59.767 [2024-12-09 05:28:46.701541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.767 [2024-12-09 05:28:46.731470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.024 [2024-12-09 05:28:46.770113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58699 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58699 /var/tmp/spdk2.sock 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58699 ']' 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:38:00.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:00.024 05:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:00.024 [2024-12-09 05:28:46.965356] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:00.024 [2024-12-09 05:28:46.965944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58699 ] 00:38:00.282 [2024-12-09 05:28:47.138381] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
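The non_locking_app_on_locked_coremask case above boils down to launching a second target that opts out of core locking while the first still holds the core 0 lock; a rough sketch of that launch sequence follows, using the binary path and flags visible in the trace (the variable name SPDK_BIN is an assumption).

  # Sketch of the two-target launch traced above.
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt          # path taken from the trace
  "$SPDK_BIN" -m 0x1 &                                              # first target claims /var/tmp/spdk_cpu_lock_000
  "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second skips lock claiming, so no conflict
  # both targets are expected to come up; only the first one owns the core lock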
00:38:00.282 [2024-12-09 05:28:47.138425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.282 [2024-12-09 05:28:47.200345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.540 [2024-12-09 05:28:47.278692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:01.106 05:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:01.106 05:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:38:01.106 05:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58696 00:38:01.106 05:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:38:01.106 05:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58696 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58696 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58696 ']' 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58696 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58696 00:38:02.040 killing process with pid 58696 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58696' 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58696 00:38:02.040 05:28:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58696 00:38:02.608 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58699 00:38:02.608 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58699 ']' 00:38:02.608 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58699 00:38:02.608 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:38:02.608 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:02.608 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58699 00:38:02.608 killing process with pid 58699 00:38:02.608 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:02.608 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:02.608 05:28:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58699' 00:38:02.608 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58699 00:38:02.608 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58699 00:38:02.887 ************************************ 00:38:02.887 END TEST non_locking_app_on_locked_coremask 00:38:02.887 ************************************ 00:38:02.887 00:38:02.887 real 0m3.261s 00:38:02.887 user 0m3.756s 00:38:02.887 sys 0m0.909s 00:38:02.887 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:02.887 05:28:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:02.887 05:28:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:38:02.887 05:28:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:02.887 05:28:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:02.887 05:28:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:38:02.887 ************************************ 00:38:02.887 START TEST locking_app_on_unlocked_coremask 00:38:02.887 ************************************ 00:38:02.887 05:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:38:02.887 05:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58766 00:38:02.887 05:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58766 /var/tmp/spdk.sock 00:38:02.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:02.887 05:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58766 ']' 00:38:02.887 05:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:38:02.887 05:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.887 05:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:02.887 05:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.887 05:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:02.887 05:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:03.152 [2024-12-09 05:28:49.847573] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:03.152 [2024-12-09 05:28:49.847697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58766 ] 00:38:03.152 [2024-12-09 05:28:49.989457] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:38:03.152 [2024-12-09 05:28:49.989832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.152 [2024-12-09 05:28:50.039986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:03.152 [2024-12-09 05:28:50.093130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:03.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58775 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58775 /var/tmp/spdk2.sock 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58775 ']' 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:03.412 05:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:03.412 [2024-12-09 05:28:50.305472] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:03.412 [2024-12-09 05:28:50.305972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58775 ] 00:38:03.672 [2024-12-09 05:28:50.478597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.672 [2024-12-09 05:28:50.546001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:03.672 [2024-12-09 05:28:50.624451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:04.610 05:28:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:04.610 05:28:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:38:04.610 05:28:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58775 00:38:04.610 05:28:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58775 00:38:04.610 05:28:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58766 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58766 ']' 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58766 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58766 00:38:05.545 killing process with pid 58766 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58766' 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58766 00:38:05.545 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58766 00:38:06.114 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58775 00:38:06.114 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58775 ']' 00:38:06.114 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58775 00:38:06.114 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:38:06.114 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:06.114 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58775 00:38:06.114 killing process with pid 58775 00:38:06.114 05:28:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:06.114 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:06.114 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58775' 00:38:06.114 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58775 00:38:06.114 05:28:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58775 00:38:06.373 00:38:06.373 real 0m3.372s 00:38:06.373 user 0m3.966s 00:38:06.373 sys 0m0.926s 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:06.373 ************************************ 00:38:06.373 END TEST locking_app_on_unlocked_coremask 00:38:06.373 ************************************ 00:38:06.373 05:28:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:38:06.373 05:28:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:06.373 05:28:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.373 05:28:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:38:06.373 ************************************ 00:38:06.373 START TEST locking_app_on_locked_coremask 00:38:06.373 ************************************ 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:38:06.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58836 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58836 /var/tmp/spdk.sock 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58836 ']' 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:06.373 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:06.373 [2024-12-09 05:28:53.278720] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:06.373 [2024-12-09 05:28:53.278805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58836 ] 00:38:06.632 [2024-12-09 05:28:53.424757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.632 [2024-12-09 05:28:53.459377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.632 [2024-12-09 05:28:53.502533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58850 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58850 /var/tmp/spdk2.sock 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58850 /var/tmp/spdk2.sock 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58850 /var/tmp/spdk2.sock 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58850 ']' 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:38:06.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:06.891 05:28:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:06.891 [2024-12-09 05:28:53.711727] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:06.891 [2024-12-09 05:28:53.712075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 00:38:07.150 [2024-12-09 05:28:53.885131] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58836 has claimed it. 00:38:07.150 [2024-12-09 05:28:53.885246] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:38:07.718 ERROR: process (pid: 58850) is no longer running 00:38:07.718 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58850) - No such process 00:38:07.718 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:07.718 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:38:07.718 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:38:07.718 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:07.718 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:07.718 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:07.718 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58836 00:38:07.718 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58836 00:38:07.718 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58836 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58836 ']' 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58836 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58836 00:38:07.978 killing process with pid 58836 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58836' 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58836 00:38:07.978 05:28:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58836 00:38:08.545 ************************************ 00:38:08.545 END TEST locking_app_on_locked_coremask 00:38:08.545 ************************************ 00:38:08.545 00:38:08.545 real 0m2.016s 00:38:08.545 user 0m2.397s 00:38:08.545 sys 0m0.523s 00:38:08.545 05:28:55 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:08.545 05:28:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:08.545 05:28:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:38:08.545 05:28:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:08.545 05:28:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:08.545 05:28:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:38:08.545 ************************************ 00:38:08.545 START TEST locking_overlapped_coremask 00:38:08.545 ************************************ 00:38:08.545 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:38:08.545 05:28:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58895 00:38:08.545 05:28:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58895 /var/tmp/spdk.sock 00:38:08.545 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58895 ']' 00:38:08.545 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:08.545 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:08.545 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:08.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:08.545 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:08.546 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:08.546 05:28:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:38:08.546 [2024-12-09 05:28:55.345753] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:08.546 [2024-12-09 05:28:55.345857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ] 00:38:08.546 [2024-12-09 05:28:55.498843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:08.804 [2024-12-09 05:28:55.536804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.804 [2024-12-09 05:28:55.536935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:08.804 [2024-12-09 05:28:55.536937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.804 [2024-12-09 05:28:55.580068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58906 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58906 /var/tmp/spdk2.sock 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58906 /var/tmp/spdk2.sock 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58906 /var/tmp/spdk2.sock 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58906 ']' 00:38:08.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:08.804 05:28:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:09.062 [2024-12-09 05:28:55.788626] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
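The locking_overlapped_coremask case above relies on masks 0x7 (cores 0-2) and 0x1c (cores 2-4) sharing core 2, so the second launch is expected to abort with the "Cannot create lock on core 2" error seen below; this is a hedged sketch of that negative check, standing in for the trace's NOT waitforlisten wrapper.

  # Sketch of the overlap check traced above: the masks share core 2, so the
  # second target must fail to acquire its core lock.
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$SPDK_BIN" -m 0x7 &
  sleep 2                                          # crude stand-in for waitforlisten
  if "$SPDK_BIN" -m 0x1c -r /var/tmp/spdk2.sock; then
      echo "unexpected: overlapping core mask did not abort" >&2
  fi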
00:38:09.062 [2024-12-09 05:28:55.788751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58906 ] 00:38:09.062 [2024-12-09 05:28:55.963533] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58895 has claimed it. 00:38:09.062 [2024-12-09 05:28:55.963825] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:38:09.627 ERROR: process (pid: 58906) is no longer running 00:38:09.627 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58906) - No such process 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58895 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58895 ']' 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58895 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:09.627 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58895 00:38:09.884 killing process with pid 58895 00:38:09.884 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:09.884 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:09.884 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58895' 00:38:09.884 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58895 00:38:09.884 05:28:56 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58895 00:38:10.141 ************************************ 00:38:10.141 END TEST locking_overlapped_coremask 00:38:10.141 ************************************ 00:38:10.141 00:38:10.141 real 0m1.629s 00:38:10.141 user 0m4.468s 00:38:10.141 sys 0m0.323s 00:38:10.141 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:10.142 05:28:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:38:10.142 05:28:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:10.142 05:28:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:10.142 05:28:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:38:10.142 ************************************ 00:38:10.142 START TEST locking_overlapped_coremask_via_rpc 00:38:10.142 ************************************ 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:38:10.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58946 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58946 /var/tmp/spdk.sock 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58946 ']' 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:10.142 05:28:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:10.142 [2024-12-09 05:28:57.021642] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:10.142 [2024-12-09 05:28:57.022324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58946 ] 00:38:10.399 [2024-12-09 05:28:57.177287] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:38:10.399 [2024-12-09 05:28:57.177336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:10.399 [2024-12-09 05:28:57.213509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.399 [2024-12-09 05:28:57.213643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:10.399 [2024-12-09 05:28:57.213647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.399 [2024-12-09 05:28:57.257468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:10.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58957 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58957 /var/tmp/spdk2.sock 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58957 ']' 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:10.657 05:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:10.657 [2024-12-09 05:28:57.467475] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:10.657 [2024-12-09 05:28:57.467593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58957 ] 00:38:10.914 [2024-12-09 05:28:57.638136] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:38:10.914 [2024-12-09 05:28:57.642570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:10.914 [2024-12-09 05:28:57.707649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:10.914 [2024-12-09 05:28:57.709660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:10.914 [2024-12-09 05:28:57.709662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:10.914 [2024-12-09 05:28:57.785442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:11.845 [2024-12-09 05:28:58.556716] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58946 has claimed it. 00:38:11.845 request: 00:38:11.845 { 00:38:11.845 "method": "framework_enable_cpumask_locks", 00:38:11.845 "req_id": 1 00:38:11.845 } 00:38:11.845 Got JSON-RPC error response 00:38:11.845 response: 00:38:11.845 { 00:38:11.845 "code": -32603, 00:38:11.845 "message": "Failed to claim CPU core: 2" 00:38:11.845 } 00:38:11.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
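The claim failure captured above is the expected outcome of this test: the first target (pid 58946, core mask 0x7) enabled the CPU core locks via framework_enable_cpumask_locks, so the second target (pid 58957, core mask 0x1c), which overlaps it on core 2, cannot claim that core. A minimal sketch of issuing the same call by hand with rpc.py (the script path, the -s socket flag, and the error code are all taken from this log, not invented):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected to fail with JSON-RPC error -32603 "Failed to claim CPU core: 2"
  # for as long as the locks created by pid 58946 are still held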
00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58946 /var/tmp/spdk.sock 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58946 ']' 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:11.845 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:12.103 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:12.103 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:12.103 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58957 /var/tmp/spdk2.sock 00:38:12.103 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58957 ']' 00:38:12.103 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:38:12.103 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.103 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:38:12.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:38:12.103 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.103 05:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:12.361 05:28:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:12.361 05:28:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:12.361 05:28:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:38:12.361 05:28:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:38:12.361 05:28:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:38:12.361 05:28:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:38:12.361 00:38:12.361 real 0m2.144s 00:38:12.361 user 0m1.269s 00:38:12.361 sys 0m0.158s 00:38:12.361 ************************************ 00:38:12.361 END TEST locking_overlapped_coremask_via_rpc 00:38:12.361 ************************************ 00:38:12.361 05:28:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:12.361 05:28:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:12.361 05:28:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:38:12.361 05:28:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58946 ]] 00:38:12.361 05:28:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58946 00:38:12.361 05:28:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58946 ']' 00:38:12.361 05:28:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58946 00:38:12.361 05:28:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:38:12.361 05:28:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:12.361 05:28:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58946 00:38:12.361 killing process with pid 58946 00:38:12.361 05:28:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:12.361 05:28:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:12.361 05:28:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58946' 00:38:12.361 05:28:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58946 00:38:12.361 05:28:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58946 00:38:12.619 05:28:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58957 ]] 00:38:12.619 05:28:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58957 00:38:12.619 05:28:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58957 ']' 00:38:12.619 05:28:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58957 00:38:12.619 05:28:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:38:12.619 05:28:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:12.619 
05:28:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58957 00:38:12.619 05:28:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:38:12.619 05:28:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:38:12.619 killing process with pid 58957 00:38:12.619 05:28:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58957' 00:38:12.619 05:28:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58957 00:38:12.619 05:28:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58957 00:38:12.877 05:28:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:38:12.877 05:28:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:38:12.877 Process with pid 58946 is not found 00:38:12.877 05:28:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58946 ]] 00:38:12.877 05:28:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58946 00:38:12.877 05:28:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58946 ']' 00:38:12.877 05:28:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58946 00:38:12.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58946) - No such process 00:38:12.877 05:28:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58946 is not found' 00:38:12.877 05:28:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58957 ]] 00:38:12.877 05:28:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58957 00:38:12.877 05:28:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58957 ']' 00:38:12.877 05:28:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58957 00:38:12.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58957) - No such process 00:38:12.877 Process with pid 58957 is not found 00:38:12.877 05:28:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58957 is not found' 00:38:12.877 05:28:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:38:12.877 00:38:12.877 real 0m16.017s 00:38:12.877 user 0m29.354s 00:38:12.877 sys 0m4.377s 00:38:12.877 05:28:59 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:12.877 05:28:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:38:12.877 ************************************ 00:38:12.877 END TEST cpu_locks 00:38:12.877 ************************************ 00:38:12.877 ************************************ 00:38:12.877 END TEST event 00:38:12.877 ************************************ 00:38:12.877 00:38:12.877 real 0m43.428s 00:38:12.877 user 1m25.882s 00:38:12.877 sys 0m7.765s 00:38:12.877 05:28:59 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:12.877 05:28:59 event -- common/autotest_common.sh@10 -- # set +x 00:38:13.136 05:28:59 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:38:13.136 05:28:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:13.136 05:28:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:13.136 05:28:59 -- common/autotest_common.sh@10 -- # set +x 00:38:13.136 ************************************ 00:38:13.136 START TEST thread 00:38:13.136 ************************************ 00:38:13.136 05:28:59 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:38:13.136 * Looking for test storage... 
00:38:13.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:38:13.136 05:28:59 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:13.136 05:28:59 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:13.136 05:28:59 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:38:13.136 05:29:00 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:13.136 05:29:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:13.136 05:29:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:13.136 05:29:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:13.136 05:29:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:38:13.136 05:29:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:38:13.136 05:29:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:38:13.136 05:29:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:38:13.136 05:29:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:38:13.136 05:29:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:38:13.136 05:29:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:38:13.136 05:29:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:13.136 05:29:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:38:13.136 05:29:00 thread -- scripts/common.sh@345 -- # : 1 00:38:13.136 05:29:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:13.136 05:29:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:13.136 05:29:00 thread -- scripts/common.sh@365 -- # decimal 1 00:38:13.136 05:29:00 thread -- scripts/common.sh@353 -- # local d=1 00:38:13.136 05:29:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:13.136 05:29:00 thread -- scripts/common.sh@355 -- # echo 1 00:38:13.136 05:29:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:38:13.136 05:29:00 thread -- scripts/common.sh@366 -- # decimal 2 00:38:13.136 05:29:00 thread -- scripts/common.sh@353 -- # local d=2 00:38:13.136 05:29:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:13.136 05:29:00 thread -- scripts/common.sh@355 -- # echo 2 00:38:13.136 05:29:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:38:13.136 05:29:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:13.136 05:29:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:13.136 05:29:00 thread -- scripts/common.sh@368 -- # return 0 00:38:13.136 05:29:00 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:13.136 05:29:00 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:13.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.136 --rc genhtml_branch_coverage=1 00:38:13.136 --rc genhtml_function_coverage=1 00:38:13.136 --rc genhtml_legend=1 00:38:13.136 --rc geninfo_all_blocks=1 00:38:13.136 --rc geninfo_unexecuted_blocks=1 00:38:13.136 00:38:13.136 ' 00:38:13.136 05:29:00 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:13.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.136 --rc genhtml_branch_coverage=1 00:38:13.136 --rc genhtml_function_coverage=1 00:38:13.136 --rc genhtml_legend=1 00:38:13.136 --rc geninfo_all_blocks=1 00:38:13.136 --rc geninfo_unexecuted_blocks=1 00:38:13.136 00:38:13.136 ' 00:38:13.136 05:29:00 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:13.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:13.136 --rc genhtml_branch_coverage=1 00:38:13.136 --rc genhtml_function_coverage=1 00:38:13.136 --rc genhtml_legend=1 00:38:13.136 --rc geninfo_all_blocks=1 00:38:13.136 --rc geninfo_unexecuted_blocks=1 00:38:13.136 00:38:13.136 ' 00:38:13.136 05:29:00 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:13.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.136 --rc genhtml_branch_coverage=1 00:38:13.136 --rc genhtml_function_coverage=1 00:38:13.136 --rc genhtml_legend=1 00:38:13.136 --rc geninfo_all_blocks=1 00:38:13.136 --rc geninfo_unexecuted_blocks=1 00:38:13.136 00:38:13.136 ' 00:38:13.136 05:29:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:38:13.136 05:29:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:38:13.136 05:29:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:13.136 05:29:00 thread -- common/autotest_common.sh@10 -- # set +x 00:38:13.136 ************************************ 00:38:13.136 START TEST thread_poller_perf 00:38:13.136 ************************************ 00:38:13.136 05:29:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:38:13.394 [2024-12-09 05:29:00.106136] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:13.394 [2024-12-09 05:29:00.106419] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59087 ] 00:38:13.394 [2024-12-09 05:29:00.256059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.394 [2024-12-09 05:29:00.290168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.394 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:38:14.768 [2024-12-09T05:29:01.740Z] ====================================== 00:38:14.768 [2024-12-09T05:29:01.740Z] busy:2208204010 (cyc) 00:38:14.768 [2024-12-09T05:29:01.740Z] total_run_count: 306000 00:38:14.768 [2024-12-09T05:29:01.740Z] tsc_hz: 2200000000 (cyc) 00:38:14.768 [2024-12-09T05:29:01.741Z] ====================================== 00:38:14.769 [2024-12-09T05:29:01.741Z] poller_cost: 7216 (cyc), 3280 (nsec) 00:38:14.769 00:38:14.769 real 0m1.308s 00:38:14.769 user 0m1.164s 00:38:14.769 sys 0m0.035s 00:38:14.769 05:29:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.769 05:29:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:38:14.769 ************************************ 00:38:14.769 END TEST thread_poller_perf 00:38:14.769 ************************************ 00:38:14.769 05:29:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:38:14.769 05:29:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:38:14.769 05:29:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:14.769 05:29:01 thread -- common/autotest_common.sh@10 -- # set +x 00:38:14.769 ************************************ 00:38:14.769 START TEST thread_poller_perf 00:38:14.769 ************************************ 00:38:14.769 05:29:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:38:14.769 [2024-12-09 05:29:01.462446] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:14.769 [2024-12-09 05:29:01.462724] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:38:14.769 [2024-12-09 05:29:01.614667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.769 Running 1000 pollers for 1 seconds with 0 microseconds period. 
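The poller_cost figures in the table above are consistent with dividing the busy cycle count by the run count and converting cycles to nanoseconds via the reported TSC rate. A quick sanity check of that arithmetic (this awk one-liner is not part of the test suite, just a recomputation of the numbers printed above):

  awk 'BEGIN { busy=2208204010; runs=306000; hz=2200000000;
               cyc = busy / runs;
               printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'
  # prints: poller_cost: 7216 (cyc), 3280 (nsec), matching the 1-microsecond-period run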
00:38:14.769 [2024-12-09 05:29:01.647988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.198 [2024-12-09T05:29:03.170Z] ====================================== 00:38:16.198 [2024-12-09T05:29:03.170Z] busy:2202110452 (cyc) 00:38:16.198 [2024-12-09T05:29:03.170Z] total_run_count: 3996000 00:38:16.198 [2024-12-09T05:29:03.170Z] tsc_hz: 2200000000 (cyc) 00:38:16.198 [2024-12-09T05:29:03.170Z] ====================================== 00:38:16.198 [2024-12-09T05:29:03.170Z] poller_cost: 551 (cyc), 250 (nsec) 00:38:16.198 00:38:16.198 real 0m1.296s 00:38:16.198 user 0m1.157s 00:38:16.198 sys 0m0.031s 00:38:16.198 05:29:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:16.198 ************************************ 00:38:16.198 END TEST thread_poller_perf 00:38:16.198 ************************************ 00:38:16.198 05:29:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:38:16.198 05:29:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:38:16.198 00:38:16.198 real 0m2.890s 00:38:16.198 user 0m2.476s 00:38:16.198 sys 0m0.192s 00:38:16.198 ************************************ 00:38:16.198 END TEST thread 00:38:16.198 ************************************ 00:38:16.198 05:29:02 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:16.198 05:29:02 thread -- common/autotest_common.sh@10 -- # set +x 00:38:16.198 05:29:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:38:16.198 05:29:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:38:16.198 05:29:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:16.198 05:29:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:16.198 05:29:02 -- common/autotest_common.sh@10 -- # set +x 00:38:16.198 ************************************ 00:38:16.198 START TEST app_cmdline 00:38:16.198 ************************************ 00:38:16.198 05:29:02 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:38:16.198 * Looking for test storage... 
00:38:16.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:38:16.198 05:29:02 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:16.198 05:29:02 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:38:16.198 05:29:02 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:16.198 05:29:02 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:16.198 05:29:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:16.198 05:29:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:16.198 05:29:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:16.198 05:29:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:38:16.198 05:29:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:38:16.198 05:29:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:38:16.198 05:29:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:38:16.198 05:29:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:38:16.198 05:29:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:38:16.198 05:29:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:38:16.199 05:29:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:16.199 05:29:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:16.199 05:29:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:16.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.199 --rc genhtml_branch_coverage=1 00:38:16.199 --rc genhtml_function_coverage=1 00:38:16.199 --rc genhtml_legend=1 00:38:16.199 --rc geninfo_all_blocks=1 00:38:16.199 --rc geninfo_unexecuted_blocks=1 00:38:16.199 00:38:16.199 ' 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:16.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.199 --rc genhtml_branch_coverage=1 00:38:16.199 --rc genhtml_function_coverage=1 00:38:16.199 --rc genhtml_legend=1 00:38:16.199 --rc geninfo_all_blocks=1 00:38:16.199 --rc geninfo_unexecuted_blocks=1 00:38:16.199 
00:38:16.199 ' 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:16.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.199 --rc genhtml_branch_coverage=1 00:38:16.199 --rc genhtml_function_coverage=1 00:38:16.199 --rc genhtml_legend=1 00:38:16.199 --rc geninfo_all_blocks=1 00:38:16.199 --rc geninfo_unexecuted_blocks=1 00:38:16.199 00:38:16.199 ' 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:16.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.199 --rc genhtml_branch_coverage=1 00:38:16.199 --rc genhtml_function_coverage=1 00:38:16.199 --rc genhtml_legend=1 00:38:16.199 --rc geninfo_all_blocks=1 00:38:16.199 --rc geninfo_unexecuted_blocks=1 00:38:16.199 00:38:16.199 ' 00:38:16.199 05:29:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:38:16.199 05:29:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59205 00:38:16.199 05:29:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59205 00:38:16.199 05:29:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59205 ']' 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:16.199 05:29:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:38:16.199 [2024-12-09 05:29:03.096585] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:16.199 [2024-12-09 05:29:03.096685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59205 ] 00:38:16.456 [2024-12-09 05:29:03.250392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.456 [2024-12-09 05:29:03.289132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.456 [2024-12-09 05:29:03.334299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:17.391 05:29:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:17.391 05:29:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:38:17.391 05:29:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:38:17.648 { 00:38:17.648 "version": "SPDK v25.01-pre git sha1 afe42438a", 00:38:17.648 "fields": { 00:38:17.648 "major": 25, 00:38:17.648 "minor": 1, 00:38:17.648 "patch": 0, 00:38:17.648 "suffix": "-pre", 00:38:17.648 "commit": "afe42438a" 00:38:17.648 } 00:38:17.648 } 00:38:17.648 05:29:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:38:17.648 05:29:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:38:17.648 05:29:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:38:17.648 05:29:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:38:17.648 05:29:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:38:17.648 05:29:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:38:17.648 05:29:04 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.648 05:29:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:38:17.648 05:29:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:38:17.648 05:29:04 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.648 05:29:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:38:17.648 05:29:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:38:17.648 05:29:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:38:17.648 05:29:04 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:38:17.648 05:29:04 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:38:17.648 05:29:04 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:17.648 05:29:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:17.648 05:29:04 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:17.649 05:29:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:17.649 05:29:04 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:17.649 05:29:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:17.649 05:29:04 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:17.649 05:29:04 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:17.649 05:29:04 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:38:17.907 request: 00:38:17.907 { 00:38:17.907 "method": "env_dpdk_get_mem_stats", 00:38:17.907 "req_id": 1 00:38:17.907 } 00:38:17.907 Got JSON-RPC error response 00:38:17.907 response: 00:38:17.907 { 00:38:17.907 "code": -32601, 00:38:17.907 "message": "Method not found" 00:38:17.907 } 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:17.907 05:29:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59205 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59205 ']' 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59205 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59205 00:38:17.907 killing process with pid 59205 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59205' 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@973 -- # kill 59205 00:38:17.907 05:29:04 app_cmdline -- common/autotest_common.sh@978 -- # wait 59205 00:38:18.166 00:38:18.166 real 0m2.241s 00:38:18.166 user 0m2.966s 00:38:18.166 sys 0m0.410s 00:38:18.166 05:29:05 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:18.166 05:29:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:38:18.166 ************************************ 00:38:18.166 END TEST app_cmdline 00:38:18.166 ************************************ 00:38:18.166 05:29:05 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:38:18.166 05:29:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:18.166 05:29:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:18.166 05:29:05 -- common/autotest_common.sh@10 -- # set +x 00:38:18.166 ************************************ 00:38:18.166 START TEST version 00:38:18.166 ************************************ 00:38:18.166 05:29:05 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:38:18.424 * Looking for test storage... 
00:38:18.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:38:18.424 05:29:05 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:18.424 05:29:05 version -- common/autotest_common.sh@1693 -- # lcov --version 00:38:18.424 05:29:05 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:18.424 05:29:05 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:18.424 05:29:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:18.424 05:29:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:18.424 05:29:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:18.424 05:29:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:38:18.424 05:29:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:38:18.424 05:29:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:38:18.424 05:29:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:38:18.424 05:29:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:38:18.424 05:29:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:38:18.424 05:29:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:38:18.424 05:29:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:18.424 05:29:05 version -- scripts/common.sh@344 -- # case "$op" in 00:38:18.424 05:29:05 version -- scripts/common.sh@345 -- # : 1 00:38:18.424 05:29:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:18.424 05:29:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:18.424 05:29:05 version -- scripts/common.sh@365 -- # decimal 1 00:38:18.424 05:29:05 version -- scripts/common.sh@353 -- # local d=1 00:38:18.424 05:29:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:18.424 05:29:05 version -- scripts/common.sh@355 -- # echo 1 00:38:18.424 05:29:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:38:18.424 05:29:05 version -- scripts/common.sh@366 -- # decimal 2 00:38:18.424 05:29:05 version -- scripts/common.sh@353 -- # local d=2 00:38:18.424 05:29:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:18.424 05:29:05 version -- scripts/common.sh@355 -- # echo 2 00:38:18.424 05:29:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:38:18.424 05:29:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:18.424 05:29:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:18.424 05:29:05 version -- scripts/common.sh@368 -- # return 0 00:38:18.424 05:29:05 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:18.424 05:29:05 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:18.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.424 --rc genhtml_branch_coverage=1 00:38:18.424 --rc genhtml_function_coverage=1 00:38:18.424 --rc genhtml_legend=1 00:38:18.424 --rc geninfo_all_blocks=1 00:38:18.424 --rc geninfo_unexecuted_blocks=1 00:38:18.424 00:38:18.424 ' 00:38:18.424 05:29:05 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:18.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.424 --rc genhtml_branch_coverage=1 00:38:18.424 --rc genhtml_function_coverage=1 00:38:18.424 --rc genhtml_legend=1 00:38:18.424 --rc geninfo_all_blocks=1 00:38:18.424 --rc geninfo_unexecuted_blocks=1 00:38:18.424 00:38:18.424 ' 00:38:18.424 05:29:05 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:18.424 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:38:18.424 --rc genhtml_branch_coverage=1 00:38:18.424 --rc genhtml_function_coverage=1 00:38:18.424 --rc genhtml_legend=1 00:38:18.424 --rc geninfo_all_blocks=1 00:38:18.424 --rc geninfo_unexecuted_blocks=1 00:38:18.424 00:38:18.424 ' 00:38:18.424 05:29:05 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:18.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.424 --rc genhtml_branch_coverage=1 00:38:18.424 --rc genhtml_function_coverage=1 00:38:18.424 --rc genhtml_legend=1 00:38:18.424 --rc geninfo_all_blocks=1 00:38:18.424 --rc geninfo_unexecuted_blocks=1 00:38:18.424 00:38:18.424 ' 00:38:18.424 05:29:05 version -- app/version.sh@17 -- # get_header_version major 00:38:18.424 05:29:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:38:18.424 05:29:05 version -- app/version.sh@14 -- # cut -f2 00:38:18.424 05:29:05 version -- app/version.sh@14 -- # tr -d '"' 00:38:18.425 05:29:05 version -- app/version.sh@17 -- # major=25 00:38:18.425 05:29:05 version -- app/version.sh@18 -- # get_header_version minor 00:38:18.425 05:29:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:38:18.425 05:29:05 version -- app/version.sh@14 -- # cut -f2 00:38:18.425 05:29:05 version -- app/version.sh@14 -- # tr -d '"' 00:38:18.425 05:29:05 version -- app/version.sh@18 -- # minor=1 00:38:18.425 05:29:05 version -- app/version.sh@19 -- # get_header_version patch 00:38:18.425 05:29:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:38:18.425 05:29:05 version -- app/version.sh@14 -- # cut -f2 00:38:18.425 05:29:05 version -- app/version.sh@14 -- # tr -d '"' 00:38:18.425 05:29:05 version -- app/version.sh@19 -- # patch=0 00:38:18.425 05:29:05 version -- app/version.sh@20 -- # get_header_version suffix 00:38:18.425 05:29:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:38:18.425 05:29:05 version -- app/version.sh@14 -- # cut -f2 00:38:18.425 05:29:05 version -- app/version.sh@14 -- # tr -d '"' 00:38:18.425 05:29:05 version -- app/version.sh@20 -- # suffix=-pre 00:38:18.425 05:29:05 version -- app/version.sh@22 -- # version=25.1 00:38:18.425 05:29:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:38:18.425 05:29:05 version -- app/version.sh@28 -- # version=25.1rc0 00:38:18.425 05:29:05 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:18.425 05:29:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:38:18.425 05:29:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:38:18.425 05:29:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:38:18.425 00:38:18.425 real 0m0.234s 00:38:18.425 user 0m0.144s 00:38:18.425 sys 0m0.121s 00:38:18.425 ************************************ 00:38:18.425 END TEST version 00:38:18.425 ************************************ 00:38:18.425 05:29:05 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:18.425 05:29:05 version -- common/autotest_common.sh@10 -- # set +x 00:38:18.425 05:29:05 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:38:18.425 05:29:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:38:18.425 05:29:05 -- spdk/autotest.sh@194 -- # uname -s 00:38:18.683 05:29:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:38:18.683 05:29:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:38:18.683 05:29:05 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:38:18.683 05:29:05 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:38:18.683 05:29:05 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:38:18.683 05:29:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:18.683 05:29:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:18.683 05:29:05 -- common/autotest_common.sh@10 -- # set +x 00:38:18.683 ************************************ 00:38:18.683 START TEST spdk_dd 00:38:18.683 ************************************ 00:38:18.683 05:29:05 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:38:18.683 * Looking for test storage... 00:38:18.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:18.683 05:29:05 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:18.683 05:29:05 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:38:18.683 05:29:05 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:18.683 05:29:05 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:38:18.683 05:29:05 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@345 -- # : 1 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@368 -- # return 0 00:38:18.684 05:29:05 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:18.684 05:29:05 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:18.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.684 --rc genhtml_branch_coverage=1 00:38:18.684 --rc genhtml_function_coverage=1 00:38:18.684 --rc genhtml_legend=1 00:38:18.684 --rc geninfo_all_blocks=1 00:38:18.684 --rc geninfo_unexecuted_blocks=1 00:38:18.684 00:38:18.684 ' 00:38:18.684 05:29:05 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:18.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.684 --rc genhtml_branch_coverage=1 00:38:18.684 --rc genhtml_function_coverage=1 00:38:18.684 --rc genhtml_legend=1 00:38:18.684 --rc geninfo_all_blocks=1 00:38:18.684 --rc geninfo_unexecuted_blocks=1 00:38:18.684 00:38:18.684 ' 00:38:18.684 05:29:05 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:18.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.684 --rc genhtml_branch_coverage=1 00:38:18.684 --rc genhtml_function_coverage=1 00:38:18.684 --rc genhtml_legend=1 00:38:18.684 --rc geninfo_all_blocks=1 00:38:18.684 --rc geninfo_unexecuted_blocks=1 00:38:18.684 00:38:18.684 ' 00:38:18.684 05:29:05 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:18.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.684 --rc genhtml_branch_coverage=1 00:38:18.684 --rc genhtml_function_coverage=1 00:38:18.684 --rc genhtml_legend=1 00:38:18.684 --rc geninfo_all_blocks=1 00:38:18.684 --rc geninfo_unexecuted_blocks=1 00:38:18.684 00:38:18.684 ' 00:38:18.684 05:29:05 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:18.684 05:29:05 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:18.684 05:29:05 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.684 05:29:05 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.684 05:29:05 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.684 05:29:05 spdk_dd -- paths/export.sh@5 -- # export PATH 00:38:18.684 05:29:05 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.684 05:29:05 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:18.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:18.943 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:18.943 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:19.204 05:29:05 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:38:19.204 05:29:05 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:38:19.204 05:29:05 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:38:19.204 05:29:05 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:38:19.204 05:29:05 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:38:19.204 05:29:05 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:38:19.204 05:29:05 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@233 -- # local class 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@235 -- # local progif 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@236 -- # class=01 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:38:19.205 05:29:05 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@18 -- # local i 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@27 -- # return 0 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@18 -- # local i 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@27 -- # return 0 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:38:19.205 05:29:05 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:38:19.205 05:29:05 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@139 -- # local lib 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.205 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:38:19.206 * spdk_dd linked to liburing 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:38:19.206 05:29:05 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:38:19.206 05:29:05 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:38:19.206 05:29:05 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:38:19.206 05:29:06 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:38:19.206 05:29:06 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:38:19.206 05:29:06 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:38:19.206 05:29:06 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:38:19.206 05:29:06 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:38:19.206 05:29:06 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:38:19.206 05:29:06 spdk_dd -- dd/common.sh@153 -- # return 0 00:38:19.206 05:29:06 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:38:19.206 05:29:06 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:38:19.206 05:29:06 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:19.206 05:29:06 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:19.206 05:29:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:19.206 ************************************ 00:38:19.206 START TEST spdk_dd_basic_rw 00:38:19.206 ************************************ 00:38:19.206 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:38:19.206 * Looking for test storage... 00:38:19.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:19.206 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:19.206 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:38:19.206 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:19.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.465 --rc genhtml_branch_coverage=1 00:38:19.465 --rc genhtml_function_coverage=1 00:38:19.465 --rc genhtml_legend=1 00:38:19.465 --rc geninfo_all_blocks=1 00:38:19.465 --rc geninfo_unexecuted_blocks=1 00:38:19.465 00:38:19.465 ' 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:19.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.465 --rc genhtml_branch_coverage=1 00:38:19.465 --rc genhtml_function_coverage=1 00:38:19.465 --rc genhtml_legend=1 00:38:19.465 --rc geninfo_all_blocks=1 00:38:19.465 --rc geninfo_unexecuted_blocks=1 00:38:19.465 00:38:19.465 ' 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:19.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.465 --rc genhtml_branch_coverage=1 00:38:19.465 --rc genhtml_function_coverage=1 00:38:19.465 --rc genhtml_legend=1 00:38:19.465 --rc geninfo_all_blocks=1 00:38:19.465 --rc geninfo_unexecuted_blocks=1 00:38:19.465 00:38:19.465 ' 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:19.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.465 --rc genhtml_branch_coverage=1 00:38:19.465 --rc genhtml_function_coverage=1 00:38:19.465 --rc genhtml_legend=1 00:38:19.465 --rc geninfo_all_blocks=1 00:38:19.465 --rc geninfo_unexecuted_blocks=1 00:38:19.465 00:38:19.465 ' 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
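Note: the spdk_nvme_identify dump that follows is what dd/common.sh's get_native_nvme_bs() parses to pick the drive's native block size for the rest of basic_rw.sh. A rough bash sketch of that parsing, reconstructed from this trace (the real helper may differ in detail):

    get_native_nvme_bs() {
        # $1 is the controller's PCIe address, e.g. 0000:00:10.0
        local pci=$1 lbaf id re
        # Capture the full identify report (the dump below) into an array.
        mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
        # Find which LBA format is currently selected (#04 in this trace) ...
        re='Current LBA Format: *LBA Format #([0-9]+)'
        [[ ${id[*]} =~ $re ]] && lbaf=${BASH_REMATCH[1]}
        # ... then report that format's data size in bytes (4096 here).
        re="LBA Format #$lbaf: Data Size: *([0-9]+)"
        [[ ${id[*]} =~ $re ]] && echo "${BASH_REMATCH[1]}"
    }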
00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:38:19.465 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:38:19.726 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:38:19.726 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:19.727 ************************************ 00:38:19.727 START TEST dd_bs_lt_native_bs 00:38:19.727 ************************************ 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:19.727 05:29:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:19.727 [2024-12-09 05:29:06.515633] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:19.727 [2024-12-09 05:29:06.516310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59551 ] 00:38:19.727 { 00:38:19.727 "subsystems": [ 00:38:19.727 { 00:38:19.727 "subsystem": "bdev", 00:38:19.727 "config": [ 00:38:19.727 { 00:38:19.727 "params": { 00:38:19.727 "trtype": "pcie", 00:38:19.727 "traddr": "0000:00:10.0", 00:38:19.727 "name": "Nvme0" 00:38:19.727 }, 00:38:19.727 "method": "bdev_nvme_attach_controller" 00:38:19.727 }, 00:38:19.727 { 00:38:19.727 "method": "bdev_wait_for_examine" 00:38:19.727 } 00:38:19.727 ] 00:38:19.727 } 00:38:19.727 ] 00:38:19.727 } 00:38:19.727 [2024-12-09 05:29:06.663638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.985 [2024-12-09 05:29:06.713028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.985 [2024-12-09 05:29:06.752071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:19.985 [2024-12-09 05:29:06.848881] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:38:19.985 [2024-12-09 05:29:06.849194] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:19.985 [2024-12-09 05:29:06.925195] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:20.243 00:38:20.243 real 0m0.572s 00:38:20.243 user 0m0.408s 00:38:20.243 sys 0m0.126s 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:20.243 05:29:07 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:38:20.243 ************************************ 00:38:20.243 END TEST dd_bs_lt_native_bs 00:38:20.243 ************************************ 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:20.243 ************************************ 00:38:20.243 START TEST dd_rw 00:38:20.243 ************************************ 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:20.243 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:20.809 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:38:20.809 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:20.809 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:20.809 05:29:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:21.067 [2024-12-09 05:29:07.791521] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
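For reference, the dd_bs_lt_native_bs check that finished above reduces to a single negative test: spdk_dd is asked to write with --bs=2048, which is below the 4096-byte native block size detected from "LBA Format #04" in the identify output, and the harness (via its NOT wrapper) expects the command to fail with "--bs value cannot be less than input (1) neither output (4096) native block size". A minimal sketch, assuming input.bin and bdev.json stand in for the /dev/fd/62 data stream and /dev/fd/61 JSON config the harness actually feeds in:

  # dd_bs_lt_native_bs, simplified: a --bs below the bdev's native block size
  # (4096 B here) must make spdk_dd exit non-zero.
  # input.bin / bdev.json are placeholders for the harness's process-substitution fds.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  if "$SPDK_DD" --if=input.bin --ob=Nvme0n1 --bs=2048 --json bdev.json; then
    echo "unexpected success: 2048 is below the 4096-byte native block size" >&2
    exit 1
  fi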
00:38:21.067 [2024-12-09 05:29:07.791620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59588 ] 00:38:21.067 { 00:38:21.067 "subsystems": [ 00:38:21.067 { 00:38:21.067 "subsystem": "bdev", 00:38:21.067 "config": [ 00:38:21.067 { 00:38:21.067 "params": { 00:38:21.067 "trtype": "pcie", 00:38:21.067 "traddr": "0000:00:10.0", 00:38:21.067 "name": "Nvme0" 00:38:21.067 }, 00:38:21.067 "method": "bdev_nvme_attach_controller" 00:38:21.067 }, 00:38:21.067 { 00:38:21.067 "method": "bdev_wait_for_examine" 00:38:21.067 } 00:38:21.067 ] 00:38:21.067 } 00:38:21.067 ] 00:38:21.067 } 00:38:21.067 [2024-12-09 05:29:07.949038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.067 [2024-12-09 05:29:07.982736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:21.067 [2024-12-09 05:29:08.013429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:21.325  [2024-12-09T05:29:08.297Z] Copying: 60/60 [kB] (average 29 MBps) 00:38:21.325 00:38:21.325 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:38:21.325 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:21.325 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:21.325 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:21.583 { 00:38:21.583 "subsystems": [ 00:38:21.583 { 00:38:21.583 "subsystem": "bdev", 00:38:21.583 "config": [ 00:38:21.583 { 00:38:21.583 "params": { 00:38:21.583 "trtype": "pcie", 00:38:21.583 "traddr": "0000:00:10.0", 00:38:21.583 "name": "Nvme0" 00:38:21.583 }, 00:38:21.583 "method": "bdev_nvme_attach_controller" 00:38:21.583 }, 00:38:21.583 { 00:38:21.583 "method": "bdev_wait_for_examine" 00:38:21.583 } 00:38:21.583 ] 00:38:21.583 } 00:38:21.583 ] 00:38:21.583 } 00:38:21.583 [2024-12-09 05:29:08.346476] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:21.583 [2024-12-09 05:29:08.346594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59601 ] 00:38:21.583 [2024-12-09 05:29:08.493855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.583 [2024-12-09 05:29:08.527174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:21.841 [2024-12-09 05:29:08.557394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:21.841  [2024-12-09T05:29:09.071Z] Copying: 60/60 [kB] (average 29 MBps) 00:38:22.099 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:22.099 05:29:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:22.099 { 00:38:22.099 "subsystems": [ 00:38:22.099 { 00:38:22.099 "subsystem": "bdev", 00:38:22.099 "config": [ 00:38:22.099 { 00:38:22.099 "params": { 00:38:22.099 "trtype": "pcie", 00:38:22.099 "traddr": "0000:00:10.0", 00:38:22.099 "name": "Nvme0" 00:38:22.099 }, 00:38:22.099 "method": "bdev_nvme_attach_controller" 00:38:22.099 }, 00:38:22.099 { 00:38:22.099 "method": "bdev_wait_for_examine" 00:38:22.099 } 00:38:22.099 ] 00:38:22.099 } 00:38:22.099 ] 00:38:22.099 } 00:38:22.099 [2024-12-09 05:29:08.894398] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:22.099 [2024-12-09 05:29:08.894670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59617 ] 00:38:22.099 [2024-12-09 05:29:09.042848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:22.358 [2024-12-09 05:29:09.076227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:22.358 [2024-12-09 05:29:09.106869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:22.358  [2024-12-09T05:29:09.588Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:22.616 00:38:22.616 05:29:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:22.616 05:29:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:38:22.616 05:29:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:38:22.616 05:29:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:38:22.616 05:29:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:38:22.616 05:29:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:22.616 05:29:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:23.184 05:29:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:38:23.184 05:29:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:23.184 05:29:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:23.184 05:29:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:23.184 [2024-12-09 05:29:10.077619] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:23.184 [2024-12-09 05:29:10.077939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59636 ] 00:38:23.184 { 00:38:23.184 "subsystems": [ 00:38:23.184 { 00:38:23.184 "subsystem": "bdev", 00:38:23.184 "config": [ 00:38:23.184 { 00:38:23.184 "params": { 00:38:23.184 "trtype": "pcie", 00:38:23.184 "traddr": "0000:00:10.0", 00:38:23.184 "name": "Nvme0" 00:38:23.184 }, 00:38:23.184 "method": "bdev_nvme_attach_controller" 00:38:23.184 }, 00:38:23.184 { 00:38:23.184 "method": "bdev_wait_for_examine" 00:38:23.184 } 00:38:23.184 ] 00:38:23.184 } 00:38:23.184 ] 00:38:23.184 } 00:38:23.442 [2024-12-09 05:29:10.230235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.442 [2024-12-09 05:29:10.263808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.442 [2024-12-09 05:29:10.294318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:23.442  [2024-12-09T05:29:10.672Z] Copying: 60/60 [kB] (average 58 MBps) 00:38:23.700 00:38:23.700 05:29:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:38:23.701 05:29:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:23.701 05:29:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:23.701 05:29:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:23.701 { 00:38:23.701 "subsystems": [ 00:38:23.701 { 00:38:23.701 "subsystem": "bdev", 00:38:23.701 "config": [ 00:38:23.701 { 00:38:23.701 "params": { 00:38:23.701 "trtype": "pcie", 00:38:23.701 "traddr": "0000:00:10.0", 00:38:23.701 "name": "Nvme0" 00:38:23.701 }, 00:38:23.701 "method": "bdev_nvme_attach_controller" 00:38:23.701 }, 00:38:23.701 { 00:38:23.701 "method": "bdev_wait_for_examine" 00:38:23.701 } 00:38:23.701 ] 00:38:23.701 } 00:38:23.701 ] 00:38:23.701 } 00:38:23.701 [2024-12-09 05:29:10.627907] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:23.701 [2024-12-09 05:29:10.628004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59649 ] 00:38:23.959 [2024-12-09 05:29:10.779833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.959 [2024-12-09 05:29:10.813060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.959 [2024-12-09 05:29:10.843386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:24.218  [2024-12-09T05:29:11.190Z] Copying: 60/60 [kB] (average 58 MBps) 00:38:24.218 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:24.218 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:24.218 { 00:38:24.218 "subsystems": [ 00:38:24.218 { 00:38:24.218 "subsystem": "bdev", 00:38:24.218 "config": [ 00:38:24.218 { 00:38:24.218 "params": { 00:38:24.218 "trtype": "pcie", 00:38:24.218 "traddr": "0000:00:10.0", 00:38:24.218 "name": "Nvme0" 00:38:24.218 }, 00:38:24.218 "method": "bdev_nvme_attach_controller" 00:38:24.218 }, 00:38:24.218 { 00:38:24.218 "method": "bdev_wait_for_examine" 00:38:24.218 } 00:38:24.218 ] 00:38:24.218 } 00:38:24.218 ] 00:38:24.218 } 00:38:24.218 [2024-12-09 05:29:11.181648] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:24.218 [2024-12-09 05:29:11.181750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:38:24.477 [2024-12-09 05:29:11.332884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.477 [2024-12-09 05:29:11.366182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.477 [2024-12-09 05:29:11.396316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:24.736  [2024-12-09T05:29:11.708Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:24.736 00:38:24.736 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:24.736 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:24.736 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:38:24.736 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:38:24.736 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:38:24.736 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:38:24.736 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:24.736 05:29:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:25.306 05:29:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:38:25.306 05:29:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:25.306 05:29:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:25.306 05:29:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:25.565 [2024-12-09 05:29:12.330046] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
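At this point the bs=4096 pass of dd_rw is complete and the 8192-byte pass is starting. Each pass follows the same write/read/verify/clear cycle visible in the trace above; a minimal sketch of one pass, with bdev.json standing in for the generated bdev config (pcie controller at 0000:00:10.0) that the harness streams over /dev/fd/62:

  # One dd_rw iteration: write dd.dump0 to the Nvme0n1 bdev, read it back into
  # dd.dump1, compare byte-for-byte, then zero the first MiB (clear_nvme)
  # before the next queue depth. count=15 blocks of 4096 B gives the 61440-byte size above.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
  for qd in 1 64; do
    "$SPDK_DD" --if="$DD_DIR/dd.dump0" --ob=Nvme0n1 --bs=4096 --qd="$qd" --json bdev.json
    "$SPDK_DD" --ib=Nvme0n1 --of="$DD_DIR/dd.dump1" --bs=4096 --qd="$qd" --count=15 --json bdev.json
    diff -q "$DD_DIR/dd.dump0" "$DD_DIR/dd.dump1"
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json bdev.json  # clear_nvme
  done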
00:38:25.565 [2024-12-09 05:29:12.330149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59684 ] 00:38:25.565 { 00:38:25.565 "subsystems": [ 00:38:25.565 { 00:38:25.565 "subsystem": "bdev", 00:38:25.565 "config": [ 00:38:25.565 { 00:38:25.565 "params": { 00:38:25.565 "trtype": "pcie", 00:38:25.565 "traddr": "0000:00:10.0", 00:38:25.565 "name": "Nvme0" 00:38:25.565 }, 00:38:25.565 "method": "bdev_nvme_attach_controller" 00:38:25.565 }, 00:38:25.565 { 00:38:25.565 "method": "bdev_wait_for_examine" 00:38:25.565 } 00:38:25.565 ] 00:38:25.565 } 00:38:25.565 ] 00:38:25.565 } 00:38:25.565 [2024-12-09 05:29:12.488063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.565 [2024-12-09 05:29:12.526471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.824 [2024-12-09 05:29:12.558969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:25.824  [2024-12-09T05:29:13.057Z] Copying: 56/56 [kB] (average 27 MBps) 00:38:26.085 00:38:26.085 05:29:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:38:26.085 05:29:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:26.085 05:29:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:26.085 05:29:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:26.085 { 00:38:26.085 "subsystems": [ 00:38:26.085 { 00:38:26.085 "subsystem": "bdev", 00:38:26.085 "config": [ 00:38:26.085 { 00:38:26.085 "params": { 00:38:26.085 "trtype": "pcie", 00:38:26.085 "traddr": "0000:00:10.0", 00:38:26.085 "name": "Nvme0" 00:38:26.085 }, 00:38:26.085 "method": "bdev_nvme_attach_controller" 00:38:26.085 }, 00:38:26.085 { 00:38:26.085 "method": "bdev_wait_for_examine" 00:38:26.085 } 00:38:26.085 ] 00:38:26.085 } 00:38:26.085 ] 00:38:26.085 } 00:38:26.085 [2024-12-09 05:29:12.895269] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:26.085 [2024-12-09 05:29:12.895377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59703 ] 00:38:26.085 [2024-12-09 05:29:13.043237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.348 [2024-12-09 05:29:13.077329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.348 [2024-12-09 05:29:13.107084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:26.348  [2024-12-09T05:29:13.578Z] Copying: 56/56 [kB] (average 54 MBps) 00:38:26.606 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:26.606 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:26.606 [2024-12-09 05:29:13.429711] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:26.606 [2024-12-09 05:29:13.429803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59717 ] 00:38:26.606 { 00:38:26.606 "subsystems": [ 00:38:26.606 { 00:38:26.606 "subsystem": "bdev", 00:38:26.606 "config": [ 00:38:26.606 { 00:38:26.606 "params": { 00:38:26.606 "trtype": "pcie", 00:38:26.606 "traddr": "0000:00:10.0", 00:38:26.606 "name": "Nvme0" 00:38:26.606 }, 00:38:26.606 "method": "bdev_nvme_attach_controller" 00:38:26.606 }, 00:38:26.606 { 00:38:26.606 "method": "bdev_wait_for_examine" 00:38:26.606 } 00:38:26.606 ] 00:38:26.606 } 00:38:26.606 ] 00:38:26.606 } 00:38:26.864 [2024-12-09 05:29:13.574742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.864 [2024-12-09 05:29:13.607542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.864 [2024-12-09 05:29:13.637115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:26.864  [2024-12-09T05:29:14.094Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:27.122 00:38:27.122 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:27.122 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:38:27.122 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:38:27.122 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:38:27.122 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:38:27.122 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:27.122 05:29:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:27.687 05:29:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:38:27.687 05:29:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:27.687 05:29:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:27.687 05:29:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:27.688 [2024-12-09 05:29:14.518664] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:27.688 [2024-12-09 05:29:14.518755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59737 ] 00:38:27.688 { 00:38:27.688 "subsystems": [ 00:38:27.688 { 00:38:27.688 "subsystem": "bdev", 00:38:27.688 "config": [ 00:38:27.688 { 00:38:27.688 "params": { 00:38:27.688 "trtype": "pcie", 00:38:27.688 "traddr": "0000:00:10.0", 00:38:27.688 "name": "Nvme0" 00:38:27.688 }, 00:38:27.688 "method": "bdev_nvme_attach_controller" 00:38:27.688 }, 00:38:27.688 { 00:38:27.688 "method": "bdev_wait_for_examine" 00:38:27.688 } 00:38:27.688 ] 00:38:27.688 } 00:38:27.688 ] 00:38:27.688 } 00:38:27.945 [2024-12-09 05:29:14.666128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.945 [2024-12-09 05:29:14.701191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.945 [2024-12-09 05:29:14.730856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:27.945  [2024-12-09T05:29:15.176Z] Copying: 56/56 [kB] (average 54 MBps) 00:38:28.204 00:38:28.204 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:38:28.204 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:28.204 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:28.204 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:28.204 { 00:38:28.204 "subsystems": [ 00:38:28.204 { 00:38:28.204 "subsystem": "bdev", 00:38:28.204 "config": [ 00:38:28.204 { 00:38:28.204 "params": { 00:38:28.204 "trtype": "pcie", 00:38:28.204 "traddr": "0000:00:10.0", 00:38:28.204 "name": "Nvme0" 00:38:28.204 }, 00:38:28.204 "method": "bdev_nvme_attach_controller" 00:38:28.204 }, 00:38:28.204 { 00:38:28.204 "method": "bdev_wait_for_examine" 00:38:28.204 } 00:38:28.204 ] 00:38:28.204 } 00:38:28.204 ] 00:38:28.204 } 00:38:28.204 [2024-12-09 05:29:15.068769] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:28.204 [2024-12-09 05:29:15.069150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:38:28.462 [2024-12-09 05:29:15.225993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.462 [2024-12-09 05:29:15.259238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.462 [2024-12-09 05:29:15.288966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:28.462  [2024-12-09T05:29:15.748Z] Copying: 56/56 [kB] (average 54 MBps) 00:38:28.776 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:28.776 05:29:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:28.776 [2024-12-09 05:29:15.621679] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:28.776 [2024-12-09 05:29:15.621786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59766 ] 00:38:28.776 { 00:38:28.776 "subsystems": [ 00:38:28.776 { 00:38:28.776 "subsystem": "bdev", 00:38:28.776 "config": [ 00:38:28.776 { 00:38:28.776 "params": { 00:38:28.776 "trtype": "pcie", 00:38:28.776 "traddr": "0000:00:10.0", 00:38:28.776 "name": "Nvme0" 00:38:28.776 }, 00:38:28.776 "method": "bdev_nvme_attach_controller" 00:38:28.776 }, 00:38:28.776 { 00:38:28.776 "method": "bdev_wait_for_examine" 00:38:28.776 } 00:38:28.776 ] 00:38:28.776 } 00:38:28.776 ] 00:38:28.776 } 00:38:29.033 [2024-12-09 05:29:15.771620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.033 [2024-12-09 05:29:15.804596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.033 [2024-12-09 05:29:15.833931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:29.033  [2024-12-09T05:29:16.263Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:29.291 00:38:29.291 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:29.291 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:29.291 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:38:29.291 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:38:29.291 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:38:29.291 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:38:29.291 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:29.291 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:29.856 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:38:29.856 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:29.856 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:29.856 05:29:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:29.856 [2024-12-09 05:29:16.673983] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:29.856 [2024-12-09 05:29:16.674285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59785 ] 00:38:29.856 { 00:38:29.856 "subsystems": [ 00:38:29.856 { 00:38:29.856 "subsystem": "bdev", 00:38:29.856 "config": [ 00:38:29.856 { 00:38:29.856 "params": { 00:38:29.856 "trtype": "pcie", 00:38:29.856 "traddr": "0000:00:10.0", 00:38:29.856 "name": "Nvme0" 00:38:29.856 }, 00:38:29.856 "method": "bdev_nvme_attach_controller" 00:38:29.856 }, 00:38:29.856 { 00:38:29.856 "method": "bdev_wait_for_examine" 00:38:29.856 } 00:38:29.856 ] 00:38:29.856 } 00:38:29.856 ] 00:38:29.856 } 00:38:29.856 [2024-12-09 05:29:16.821904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.115 [2024-12-09 05:29:16.854539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:30.115 [2024-12-09 05:29:16.884185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:30.115  [2024-12-09T05:29:17.345Z] Copying: 48/48 [kB] (average 46 MBps) 00:38:30.373 00:38:30.373 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:38:30.373 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:30.373 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:30.373 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:30.373 { 00:38:30.373 "subsystems": [ 00:38:30.373 { 00:38:30.373 "subsystem": "bdev", 00:38:30.373 "config": [ 00:38:30.373 { 00:38:30.373 "params": { 00:38:30.373 "trtype": "pcie", 00:38:30.373 "traddr": "0000:00:10.0", 00:38:30.373 "name": "Nvme0" 00:38:30.373 }, 00:38:30.373 "method": "bdev_nvme_attach_controller" 00:38:30.373 }, 00:38:30.373 { 00:38:30.373 "method": "bdev_wait_for_examine" 00:38:30.373 } 00:38:30.373 ] 00:38:30.373 } 00:38:30.373 ] 00:38:30.373 } 00:38:30.373 [2024-12-09 05:29:17.222362] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:30.373 [2024-12-09 05:29:17.222500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59799 ] 00:38:30.631 [2024-12-09 05:29:17.378455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.631 [2024-12-09 05:29:17.411305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:30.631 [2024-12-09 05:29:17.440727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:30.631  [2024-12-09T05:29:17.863Z] Copying: 48/48 [kB] (average 46 MBps) 00:38:30.891 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:30.891 05:29:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:30.891 { 00:38:30.891 "subsystems": [ 00:38:30.891 { 00:38:30.891 "subsystem": "bdev", 00:38:30.891 "config": [ 00:38:30.891 { 00:38:30.891 "params": { 00:38:30.891 "trtype": "pcie", 00:38:30.891 "traddr": "0000:00:10.0", 00:38:30.891 "name": "Nvme0" 00:38:30.891 }, 00:38:30.891 "method": "bdev_nvme_attach_controller" 00:38:30.891 }, 00:38:30.891 { 00:38:30.891 "method": "bdev_wait_for_examine" 00:38:30.891 } 00:38:30.891 ] 00:38:30.891 } 00:38:30.891 ] 00:38:30.891 } 00:38:30.891 [2024-12-09 05:29:17.786386] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:30.891 [2024-12-09 05:29:17.787155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59814 ] 00:38:31.149 [2024-12-09 05:29:17.943775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.149 [2024-12-09 05:29:17.976975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.149 [2024-12-09 05:29:18.006856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:31.149  [2024-12-09T05:29:18.379Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:31.407 00:38:31.407 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:31.407 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:38:31.407 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:38:31.407 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:38:31.407 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:38:31.407 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:31.407 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:31.973 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:38:31.973 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:31.973 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:31.973 05:29:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:31.973 { 00:38:31.973 "subsystems": [ 00:38:31.973 { 00:38:31.973 "subsystem": "bdev", 00:38:31.973 "config": [ 00:38:31.973 { 00:38:31.973 "params": { 00:38:31.973 "trtype": "pcie", 00:38:31.973 "traddr": "0000:00:10.0", 00:38:31.973 "name": "Nvme0" 00:38:31.973 }, 00:38:31.973 "method": "bdev_nvme_attach_controller" 00:38:31.973 }, 00:38:31.973 { 00:38:31.973 "method": "bdev_wait_for_examine" 00:38:31.973 } 00:38:31.973 ] 00:38:31.973 } 00:38:31.973 ] 00:38:31.973 } 00:38:31.973 [2024-12-09 05:29:18.871007] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:31.973 [2024-12-09 05:29:18.871377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59839 ] 00:38:32.231 [2024-12-09 05:29:19.028749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.231 [2024-12-09 05:29:19.061829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.231 [2024-12-09 05:29:19.091342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:32.231  [2024-12-09T05:29:19.461Z] Copying: 48/48 [kB] (average 46 MBps) 00:38:32.489 00:38:32.489 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:38:32.489 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:32.489 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:32.489 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:32.489 { 00:38:32.489 "subsystems": [ 00:38:32.489 { 00:38:32.489 "subsystem": "bdev", 00:38:32.489 "config": [ 00:38:32.489 { 00:38:32.489 "params": { 00:38:32.489 "trtype": "pcie", 00:38:32.489 "traddr": "0000:00:10.0", 00:38:32.489 "name": "Nvme0" 00:38:32.489 }, 00:38:32.489 "method": "bdev_nvme_attach_controller" 00:38:32.489 }, 00:38:32.489 { 00:38:32.489 "method": "bdev_wait_for_examine" 00:38:32.489 } 00:38:32.489 ] 00:38:32.489 } 00:38:32.489 ] 00:38:32.489 } 00:38:32.489 [2024-12-09 05:29:19.435118] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:32.489 [2024-12-09 05:29:19.435240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59847 ] 00:38:32.747 [2024-12-09 05:29:19.596245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.747 [2024-12-09 05:29:19.635539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.747 [2024-12-09 05:29:19.669461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:33.005  [2024-12-09T05:29:19.977Z] Copying: 48/48 [kB] (average 46 MBps) 00:38:33.005 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:33.005 05:29:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:33.262 [2024-12-09 05:29:20.003823] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:33.262 [2024-12-09 05:29:20.004103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:38:33.262 { 00:38:33.262 "subsystems": [ 00:38:33.262 { 00:38:33.262 "subsystem": "bdev", 00:38:33.262 "config": [ 00:38:33.262 { 00:38:33.262 "params": { 00:38:33.262 "trtype": "pcie", 00:38:33.262 "traddr": "0000:00:10.0", 00:38:33.262 "name": "Nvme0" 00:38:33.262 }, 00:38:33.262 "method": "bdev_nvme_attach_controller" 00:38:33.262 }, 00:38:33.262 { 00:38:33.262 "method": "bdev_wait_for_examine" 00:38:33.262 } 00:38:33.262 ] 00:38:33.262 } 00:38:33.262 ] 00:38:33.262 } 00:38:33.262 [2024-12-09 05:29:20.150697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.262 [2024-12-09 05:29:20.184715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.262 [2024-12-09 05:29:20.215437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:33.520  [2024-12-09T05:29:20.492Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:33.520 00:38:33.778 ************************************ 00:38:33.778 END TEST dd_rw 00:38:33.778 ************************************ 00:38:33.778 00:38:33.778 real 0m13.410s 00:38:33.778 user 0m10.334s 00:38:33.778 sys 0m3.712s 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:33.778 ************************************ 00:38:33.778 START TEST dd_rw_offset 00:38:33.778 ************************************ 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:33.778 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:38:33.779 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=xhugffd4n8d8y0gt93n42l7gghyrdo8hg8hz1v6yve4jt2m04u7sh2mlhv1sr0ximthfz64vulgtkfdmxrf0cqzr0qibzy0exy9prai89ykgrrfud34p12m9c4v96gxc4bapbxxxmow2jobx8gfqz9y5ru370u46ra7fkz0tghosegthlnst7n0ipcdjwj8zs23qcfdvtc46j5gu30o7wbpjakilxlzzeey84hakvh8db3swek7b1i404myx0qckbzeudcn7ve6acpuco9skma2gjdifhysd7vgjohp0w7p12twecwhvb4c6wlr385ylakif3fppclscl20bpqb33suhtvdjkxp2fz67ro8mol2pil0pcit6ox20iuxtijrvbd2579ovmkghya8i640zmg8cz9hwdyr6in9hpfx34oiljcj3h6881tz0tisd1u52v3gv3ijdz4fp1njohvmzahtloqnrrrdhcly82yqvvldhr8ripx0eny8uto59ygk153xi0shptcaqhtidcjgwwwc5kczyoljbbfrnlj9dhhnjff7y9mwb3khijtwzvaaz2595dul8f0pxwruaa4kmykrsl56r7ex9rbs1dojrbpbv0739epolr7dyxtrlq925bud4kfk3m2aoq2u2115faniv61yx4f6upui0e4kd06m8zs1xen52d076c0kesa484aaxhkumikwmohllnwvyfd9b9x651plasdmxxeav2vtfbfxj3jd5cb7dcgpdp4v8qi09y8ouh99a9q4f86dct9d6viln58lgf4wrxriurfu19nn2kqbq9nt78c5po868tw5w58bvulo8zjjbae9gh8quz0ykhya4t16p5iavfl3buk52o5gei9lpddl415c6cuxtkv4grtpurjhuib29nqs5sq8g6cd0nnom736kdflzrnpvkqhduvm42u8x7ff1bkoozcvkltxaujioc1546ifpw228z8ds4y262ogergt39797go1zgdczazgta8gzjxfbur6asnp4itrf4ykcl0xoaxzld2ibfh8rqixaw7riyl2lq33h785ncroouomnhum7ciojcnyh2zpuhzys55j8kss79bdy59yf5i4x1h0z7th1fhfh13uwgs2prp6ezi0vz48ffui7nh6093j2kahao2wppc7beubndnoonoc50l6qvacpgmrrr0jcpvdgg9aiobcotvh936zb1hur05uvzcq8lbbl3r14s3zabeycd06vsdgpoagthz5emlzyy83aaghklt1tpbp4a2oe8g10j72al5kepget0yot0d84ddhl6swiw5vrwmd3wee5f18vm5qw3sfr61gyovqom7ku0xbbnlh1vi3lviwj8izj0dtz4jn4bapt19lre1okmom3ti3ct35ft1edm3mn8it719r906my11yqm8l2xuynevucldz28omu0c2x6jm4q46l24t1k7lta4ea3sv1rj2xmkzguet5qmie9od0hthvg905x85516z5krtcokfq4uf3djnuqf2pafluko1zvd0mmh4j01nwsd9zjhxrm1d0h296nnvr6k77fd3gxoeretv6txkic0xxf10qd27aa1woorb2xfbj6chuykg1poawe50f65hrtmch1njoydg4ooc337oukg74czgf630tiihr28pqfm6opp902wt17bu194l2o51ndpzf2i3zunwgxmprpn73h4mk5rss35h3sy5tjmarugvcnsd3vg6nou992n0tsvtj32pf16vyxicoy597ueqx2u22ccwva47vokru30vhqdj1x3g5xvty41uhicv8dirvepyh3m62do5gbot8yb173tjiy095det43s91r4yup6qr13yp7l7glg8armkoi388esz6d1ghl2upbbf8mtgxjrg0w543e75tchifyjneo4yb27dv887lrki3zgv6d8n5t9py2tumal2pl2b6sh0k6sjz9z6bxx52b2k96c7tlciphxth4aflsg249e8jn1z5qf47t0poosjy861rmqij8mnq66xbx8ixl7ho7v0mbjt5ac9oqiqk433l38r1gpiqr6chst1nfvdvh1xln5sn0gamvabu2jhemy3q0balccccqi8dao5m93azxsaql55u4dytkxid5azkxdz2brqz1sgm72db4vg7v9w78ihf7o3qsr4zcyku9s116og94v6rmraemrikgglcvdbmlfldt6aisgv87gsq1x8xsrph9n81nlb23827f1hrbkud3zvdm7op54bbfjia5p4j8p362xnqzzggqyzye3reu0zkwr39hz464nebriksdbjuly1jtgssi0f79lcqgt8kix6emohtuqsffc2yavu9k7gy8u1o6mw43vwoxtew45w5ayd80fknnyev2naezcyqno3hsz72uqj6orvk8gi5gu1agovkwby9xahipn7w2gkvfa47zzpf3vclxgsxs46995lunks4rf2esrt5ksft5cfhv4yvlngx75j1szjb7be7jqqr0aec3ecg49uv59om6mb476f0gi31tc5375aqn76y9se31h709flm7k11d60g54m5st7x77lmtgdmjazifg7vqindviksso3nj6zcflitb7quxv3rwp0k9gwaz7906hc37pejnmokofhekwbl49cng4k2qjul85az9k12x92kx5fotgs8tkdw7j3npqst9rd407tc9bnjp8q9jcjl14qr6nmjb72n177thnfy0604g8ga0up2w79ayfp4jsnl8aozt2433dwe3zllnunj7rey6t8w5ulklx1km7j23g7fjmgnakf3z429tqgl1te2g82pdr2i7k6e1v6kphhrftcx1kqc5fqe68f70fvg194a7fhx1jwq2viiturfocaf3kdvtjf9r6uqr376ati356muj5hs8lpbdxokq03oavr7b7utplut90wytcdo8669rp7nwj4ssi7n6vfpiqrxp7ymyjij2qe4iek6xi7aqeynd7fa1qcgfmva2hx5z1v093qp6xjosl5x5bvj9oygfgiqnscbcqpandu0yej1j3old0tsvns6itw5k3lwi1vnbxy5n7n5262q0ahkxj04nvenn496sndjchp4jvcbswe87nnsh52dns9vg2do6kn0bjwgx9jkim6bavrmnvtp9p4ihtdj1nr3ccs6zn0irxfd7cii9mol2lywcyj56q6wg8cg6sukv4pa8z0n10trlf0qhes58tv1dwd1s1azn4k36vf0ux246rw6mlcb4wf02weg0pfk96fzsrvgbya1zxxzcoswjdyfakknu6y4b8ttr9egxg519oln5e28g6boofa201gxbt1ougdmil8rlss6dbb2g1ltrwswvpvt0t0lhdq6tdjn9ow4qw9jx15i9jx04fpm9718kyps1jq1wv0vzb8iezok88xjf467wl35ufov39msg8faqbikiivk3w2shky55n4xd0yllcaot73mq8w89v32ce5gmchn4gi2ga
on9byhp84rb2s3md1f6rtvpl882g0semeb7dpt1re6zrn3a2p31wx0qpb4isvwcgq5eqaqzrxgtfv202atf7nziocneoerxskq3svuj82u0x4ywxpn9o4xxv8by7g8nago5chkv59pta8ns2kq2cnoegtrqpnk94aqoq424u0s5q99pec72b14yzjos20t9e3jzl38hsbpzd7v1bm7rb294dwtxirhrfyeybwor2oqa0v9cwoanelvxj6rjd3c6gc1pvbkyf8msoa5vrbd8gwuxc84cu3288kqzonv8rglw8mvs7iepdpg60xnrsnhywduz4hwlqskckfencvgpoycw2460c7a91zhegvdg5pkht2lbw2k616cw6nopx2b0rsa8mj2a77ivftwt390v48xrj6t69bdjpxelfcynt6i51tof119hx92wphwff8nbjdeo2fz95ka8i6b2jookfid36assuknpeouww0gai1fz14fgqr3h11vfuiovuic09enijeo0qddldpcgvaaek0sph8l1lx9ee1k 00:38:33.779 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:38:33.779 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:38:33.779 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:38:33.779 05:29:20 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:33.779 [2024-12-09 05:29:20.638408] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:33.779 [2024-12-09 05:29:20.638769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59893 ] 00:38:33.779 { 00:38:33.779 "subsystems": [ 00:38:33.779 { 00:38:33.779 "subsystem": "bdev", 00:38:33.779 "config": [ 00:38:33.779 { 00:38:33.779 "params": { 00:38:33.779 "trtype": "pcie", 00:38:33.779 "traddr": "0000:00:10.0", 00:38:33.779 "name": "Nvme0" 00:38:33.779 }, 00:38:33.779 "method": "bdev_nvme_attach_controller" 00:38:33.779 }, 00:38:33.779 { 00:38:33.779 "method": "bdev_wait_for_examine" 00:38:33.779 } 00:38:33.779 ] 00:38:33.779 } 00:38:33.779 ] 00:38:33.779 } 00:38:34.037 [2024-12-09 05:29:20.796077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.037 [2024-12-09 05:29:20.835601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.037 [2024-12-09 05:29:20.869948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:34.037  [2024-12-09T05:29:21.269Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:38:34.297 00:38:34.297 05:29:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:38:34.297 05:29:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:38:34.297 05:29:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:38:34.297 05:29:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:34.297 [2024-12-09 05:29:21.204469] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
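The run starting here is the read-back half of dd_rw_offset: the 4096-byte pattern generated above was written one block past the start of the bdev (--seek=1) and is now read back from the same offset (--skip=1 --count=1) and compared against the original. Roughly, with bdev.json again a placeholder for the streamed config:

  # dd_rw_offset round trip: write at an offset, read back, compare 4096 bytes.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
  "$SPDK_DD" --if="$DD_DIR/dd.dump0" --ob=Nvme0n1 --seek=1 --json bdev.json
  "$SPDK_DD" --ib=Nvme0n1 --of="$DD_DIR/dd.dump1" --skip=1 --count=1 --json bdev.json
  read -rn4096 data_check < "$DD_DIR/dd.dump1"   # same read the harness does at basic_rw.sh@71
  [[ "$data_check" == "$data" ]]                 # $data holds the pattern from gen_bytes 4096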
00:38:34.297 [2024-12-09 05:29:21.204590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59912 ] 00:38:34.297 { 00:38:34.297 "subsystems": [ 00:38:34.297 { 00:38:34.297 "subsystem": "bdev", 00:38:34.297 "config": [ 00:38:34.297 { 00:38:34.297 "params": { 00:38:34.297 "trtype": "pcie", 00:38:34.297 "traddr": "0000:00:10.0", 00:38:34.297 "name": "Nvme0" 00:38:34.297 }, 00:38:34.297 "method": "bdev_nvme_attach_controller" 00:38:34.297 }, 00:38:34.297 { 00:38:34.297 "method": "bdev_wait_for_examine" 00:38:34.297 } 00:38:34.297 ] 00:38:34.297 } 00:38:34.297 ] 00:38:34.297 } 00:38:34.555 [2024-12-09 05:29:21.352669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.555 [2024-12-09 05:29:21.386827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.555 [2024-12-09 05:29:21.419402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:34.555  [2024-12-09T05:29:21.785Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:38:34.813 00:38:34.813 05:29:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ xhugffd4n8d8y0gt93n42l7gghyrdo8hg8hz1v6yve4jt2m04u7sh2mlhv1sr0ximthfz64vulgtkfdmxrf0cqzr0qibzy0exy9prai89ykgrrfud34p12m9c4v96gxc4bapbxxxmow2jobx8gfqz9y5ru370u46ra7fkz0tghosegthlnst7n0ipcdjwj8zs23qcfdvtc46j5gu30o7wbpjakilxlzzeey84hakvh8db3swek7b1i404myx0qckbzeudcn7ve6acpuco9skma2gjdifhysd7vgjohp0w7p12twecwhvb4c6wlr385ylakif3fppclscl20bpqb33suhtvdjkxp2fz67ro8mol2pil0pcit6ox20iuxtijrvbd2579ovmkghya8i640zmg8cz9hwdyr6in9hpfx34oiljcj3h6881tz0tisd1u52v3gv3ijdz4fp1njohvmzahtloqnrrrdhcly82yqvvldhr8ripx0eny8uto59ygk153xi0shptcaqhtidcjgwwwc5kczyoljbbfrnlj9dhhnjff7y9mwb3khijtwzvaaz2595dul8f0pxwruaa4kmykrsl56r7ex9rbs1dojrbpbv0739epolr7dyxtrlq925bud4kfk3m2aoq2u2115faniv61yx4f6upui0e4kd06m8zs1xen52d076c0kesa484aaxhkumikwmohllnwvyfd9b9x651plasdmxxeav2vtfbfxj3jd5cb7dcgpdp4v8qi09y8ouh99a9q4f86dct9d6viln58lgf4wrxriurfu19nn2kqbq9nt78c5po868tw5w58bvulo8zjjbae9gh8quz0ykhya4t16p5iavfl3buk52o5gei9lpddl415c6cuxtkv4grtpurjhuib29nqs5sq8g6cd0nnom736kdflzrnpvkqhduvm42u8x7ff1bkoozcvkltxaujioc1546ifpw228z8ds4y262ogergt39797go1zgdczazgta8gzjxfbur6asnp4itrf4ykcl0xoaxzld2ibfh8rqixaw7riyl2lq33h785ncroouomnhum7ciojcnyh2zpuhzys55j8kss79bdy59yf5i4x1h0z7th1fhfh13uwgs2prp6ezi0vz48ffui7nh6093j2kahao2wppc7beubndnoonoc50l6qvacpgmrrr0jcpvdgg9aiobcotvh936zb1hur05uvzcq8lbbl3r14s3zabeycd06vsdgpoagthz5emlzyy83aaghklt1tpbp4a2oe8g10j72al5kepget0yot0d84ddhl6swiw5vrwmd3wee5f18vm5qw3sfr61gyovqom7ku0xbbnlh1vi3lviwj8izj0dtz4jn4bapt19lre1okmom3ti3ct35ft1edm3mn8it719r906my11yqm8l2xuynevucldz28omu0c2x6jm4q46l24t1k7lta4ea3sv1rj2xmkzguet5qmie9od0hthvg905x85516z5krtcokfq4uf3djnuqf2pafluko1zvd0mmh4j01nwsd9zjhxrm1d0h296nnvr6k77fd3gxoeretv6txkic0xxf10qd27aa1woorb2xfbj6chuykg1poawe50f65hrtmch1njoydg4ooc337oukg74czgf630tiihr28pqfm6opp902wt17bu194l2o51ndpzf2i3zunwgxmprpn73h4mk5rss35h3sy5tjmarugvcnsd3vg6nou992n0tsvtj32pf16vyxicoy597ueqx2u22ccwva47vokru30vhqdj1x3g5xvty41uhicv8dirvepyh3m62do5gbot8yb173tjiy095det43s91r4yup6qr13yp7l7glg8armkoi388esz6d1ghl2upbbf8mtgxjrg0w543e75tchifyjneo4yb27dv887lrki3zgv6d8n5t9py2tumal2pl2b6sh0k6sjz9z6bxx52b2k96c7tlciphxth4aflsg249e8jn1z5qf47t0poosjy861rmqij8mnq66xbx8ixl7ho7v0mbjt5ac9oqiqk433l38r1gpiqr6chst1nfvdvh1xln5sn0gamvabu2
jhemy3q0balccccqi8dao5m93azxsaql55u4dytkxid5azkxdz2brqz1sgm72db4vg7v9w78ihf7o3qsr4zcyku9s116og94v6rmraemrikgglcvdbmlfldt6aisgv87gsq1x8xsrph9n81nlb23827f1hrbkud3zvdm7op54bbfjia5p4j8p362xnqzzggqyzye3reu0zkwr39hz464nebriksdbjuly1jtgssi0f79lcqgt8kix6emohtuqsffc2yavu9k7gy8u1o6mw43vwoxtew45w5ayd80fknnyev2naezcyqno3hsz72uqj6orvk8gi5gu1agovkwby9xahipn7w2gkvfa47zzpf3vclxgsxs46995lunks4rf2esrt5ksft5cfhv4yvlngx75j1szjb7be7jqqr0aec3ecg49uv59om6mb476f0gi31tc5375aqn76y9se31h709flm7k11d60g54m5st7x77lmtgdmjazifg7vqindviksso3nj6zcflitb7quxv3rwp0k9gwaz7906hc37pejnmokofhekwbl49cng4k2qjul85az9k12x92kx5fotgs8tkdw7j3npqst9rd407tc9bnjp8q9jcjl14qr6nmjb72n177thnfy0604g8ga0up2w79ayfp4jsnl8aozt2433dwe3zllnunj7rey6t8w5ulklx1km7j23g7fjmgnakf3z429tqgl1te2g82pdr2i7k6e1v6kphhrftcx1kqc5fqe68f70fvg194a7fhx1jwq2viiturfocaf3kdvtjf9r6uqr376ati356muj5hs8lpbdxokq03oavr7b7utplut90wytcdo8669rp7nwj4ssi7n6vfpiqrxp7ymyjij2qe4iek6xi7aqeynd7fa1qcgfmva2hx5z1v093qp6xjosl5x5bvj9oygfgiqnscbcqpandu0yej1j3old0tsvns6itw5k3lwi1vnbxy5n7n5262q0ahkxj04nvenn496sndjchp4jvcbswe87nnsh52dns9vg2do6kn0bjwgx9jkim6bavrmnvtp9p4ihtdj1nr3ccs6zn0irxfd7cii9mol2lywcyj56q6wg8cg6sukv4pa8z0n10trlf0qhes58tv1dwd1s1azn4k36vf0ux246rw6mlcb4wf02weg0pfk96fzsrvgbya1zxxzcoswjdyfakknu6y4b8ttr9egxg519oln5e28g6boofa201gxbt1ougdmil8rlss6dbb2g1ltrwswvpvt0t0lhdq6tdjn9ow4qw9jx15i9jx04fpm9718kyps1jq1wv0vzb8iezok88xjf467wl35ufov39msg8faqbikiivk3w2shky55n4xd0yllcaot73mq8w89v32ce5gmchn4gi2gaon9byhp84rb2s3md1f6rtvpl882g0semeb7dpt1re6zrn3a2p31wx0qpb4isvwcgq5eqaqzrxgtfv202atf7nziocneoerxskq3svuj82u0x4ywxpn9o4xxv8by7g8nago5chkv59pta8ns2kq2cnoegtrqpnk94aqoq424u0s5q99pec72b14yzjos20t9e3jzl38hsbpzd7v1bm7rb294dwtxirhrfyeybwor2oqa0v9cwoanelvxj6rjd3c6gc1pvbkyf8msoa5vrbd8gwuxc84cu3288kqzonv8rglw8mvs7iepdpg60xnrsnhywduz4hwlqskckfencvgpoycw2460c7a91zhegvdg5pkht2lbw2k616cw6nopx2b0rsa8mj2a77ivftwt390v48xrj6t69bdjpxelfcynt6i51tof119hx92wphwff8nbjdeo2fz95ka8i6b2jookfid36assuknpeouww0gai1fz14fgqr3h11vfuiovuic09enijeo0qddldpcgvaaek0sph8l1lx9ee1k == 
\x\h\u\g\f\f\d\4\n\8\d\8\y\0\g\t\9\3\n\4\2\l\7\g\g\h\y\r\d\o\8\h\g\8\h\z\1\v\6\y\v\e\4\j\t\2\m\0\4\u\7\s\h\2\m\l\h\v\1\s\r\0\x\i\m\t\h\f\z\6\4\v\u\l\g\t\k\f\d\m\x\r\f\0\c\q\z\r\0\q\i\b\z\y\0\e\x\y\9\p\r\a\i\8\9\y\k\g\r\r\f\u\d\3\4\p\1\2\m\9\c\4\v\9\6\g\x\c\4\b\a\p\b\x\x\x\m\o\w\2\j\o\b\x\8\g\f\q\z\9\y\5\r\u\3\7\0\u\4\6\r\a\7\f\k\z\0\t\g\h\o\s\e\g\t\h\l\n\s\t\7\n\0\i\p\c\d\j\w\j\8\z\s\2\3\q\c\f\d\v\t\c\4\6\j\5\g\u\3\0\o\7\w\b\p\j\a\k\i\l\x\l\z\z\e\e\y\8\4\h\a\k\v\h\8\d\b\3\s\w\e\k\7\b\1\i\4\0\4\m\y\x\0\q\c\k\b\z\e\u\d\c\n\7\v\e\6\a\c\p\u\c\o\9\s\k\m\a\2\g\j\d\i\f\h\y\s\d\7\v\g\j\o\h\p\0\w\7\p\1\2\t\w\e\c\w\h\v\b\4\c\6\w\l\r\3\8\5\y\l\a\k\i\f\3\f\p\p\c\l\s\c\l\2\0\b\p\q\b\3\3\s\u\h\t\v\d\j\k\x\p\2\f\z\6\7\r\o\8\m\o\l\2\p\i\l\0\p\c\i\t\6\o\x\2\0\i\u\x\t\i\j\r\v\b\d\2\5\7\9\o\v\m\k\g\h\y\a\8\i\6\4\0\z\m\g\8\c\z\9\h\w\d\y\r\6\i\n\9\h\p\f\x\3\4\o\i\l\j\c\j\3\h\6\8\8\1\t\z\0\t\i\s\d\1\u\5\2\v\3\g\v\3\i\j\d\z\4\f\p\1\n\j\o\h\v\m\z\a\h\t\l\o\q\n\r\r\r\d\h\c\l\y\8\2\y\q\v\v\l\d\h\r\8\r\i\p\x\0\e\n\y\8\u\t\o\5\9\y\g\k\1\5\3\x\i\0\s\h\p\t\c\a\q\h\t\i\d\c\j\g\w\w\w\c\5\k\c\z\y\o\l\j\b\b\f\r\n\l\j\9\d\h\h\n\j\f\f\7\y\9\m\w\b\3\k\h\i\j\t\w\z\v\a\a\z\2\5\9\5\d\u\l\8\f\0\p\x\w\r\u\a\a\4\k\m\y\k\r\s\l\5\6\r\7\e\x\9\r\b\s\1\d\o\j\r\b\p\b\v\0\7\3\9\e\p\o\l\r\7\d\y\x\t\r\l\q\9\2\5\b\u\d\4\k\f\k\3\m\2\a\o\q\2\u\2\1\1\5\f\a\n\i\v\6\1\y\x\4\f\6\u\p\u\i\0\e\4\k\d\0\6\m\8\z\s\1\x\e\n\5\2\d\0\7\6\c\0\k\e\s\a\4\8\4\a\a\x\h\k\u\m\i\k\w\m\o\h\l\l\n\w\v\y\f\d\9\b\9\x\6\5\1\p\l\a\s\d\m\x\x\e\a\v\2\v\t\f\b\f\x\j\3\j\d\5\c\b\7\d\c\g\p\d\p\4\v\8\q\i\0\9\y\8\o\u\h\9\9\a\9\q\4\f\8\6\d\c\t\9\d\6\v\i\l\n\5\8\l\g\f\4\w\r\x\r\i\u\r\f\u\1\9\n\n\2\k\q\b\q\9\n\t\7\8\c\5\p\o\8\6\8\t\w\5\w\5\8\b\v\u\l\o\8\z\j\j\b\a\e\9\g\h\8\q\u\z\0\y\k\h\y\a\4\t\1\6\p\5\i\a\v\f\l\3\b\u\k\5\2\o\5\g\e\i\9\l\p\d\d\l\4\1\5\c\6\c\u\x\t\k\v\4\g\r\t\p\u\r\j\h\u\i\b\2\9\n\q\s\5\s\q\8\g\6\c\d\0\n\n\o\m\7\3\6\k\d\f\l\z\r\n\p\v\k\q\h\d\u\v\m\4\2\u\8\x\7\f\f\1\b\k\o\o\z\c\v\k\l\t\x\a\u\j\i\o\c\1\5\4\6\i\f\p\w\2\2\8\z\8\d\s\4\y\2\6\2\o\g\e\r\g\t\3\9\7\9\7\g\o\1\z\g\d\c\z\a\z\g\t\a\8\g\z\j\x\f\b\u\r\6\a\s\n\p\4\i\t\r\f\4\y\k\c\l\0\x\o\a\x\z\l\d\2\i\b\f\h\8\r\q\i\x\a\w\7\r\i\y\l\2\l\q\3\3\h\7\8\5\n\c\r\o\o\u\o\m\n\h\u\m\7\c\i\o\j\c\n\y\h\2\z\p\u\h\z\y\s\5\5\j\8\k\s\s\7\9\b\d\y\5\9\y\f\5\i\4\x\1\h\0\z\7\t\h\1\f\h\f\h\1\3\u\w\g\s\2\p\r\p\6\e\z\i\0\v\z\4\8\f\f\u\i\7\n\h\6\0\9\3\j\2\k\a\h\a\o\2\w\p\p\c\7\b\e\u\b\n\d\n\o\o\n\o\c\5\0\l\6\q\v\a\c\p\g\m\r\r\r\0\j\c\p\v\d\g\g\9\a\i\o\b\c\o\t\v\h\9\3\6\z\b\1\h\u\r\0\5\u\v\z\c\q\8\l\b\b\l\3\r\1\4\s\3\z\a\b\e\y\c\d\0\6\v\s\d\g\p\o\a\g\t\h\z\5\e\m\l\z\y\y\8\3\a\a\g\h\k\l\t\1\t\p\b\p\4\a\2\o\e\8\g\1\0\j\7\2\a\l\5\k\e\p\g\e\t\0\y\o\t\0\d\8\4\d\d\h\l\6\s\w\i\w\5\v\r\w\m\d\3\w\e\e\5\f\1\8\v\m\5\q\w\3\s\f\r\6\1\g\y\o\v\q\o\m\7\k\u\0\x\b\b\n\l\h\1\v\i\3\l\v\i\w\j\8\i\z\j\0\d\t\z\4\j\n\4\b\a\p\t\1\9\l\r\e\1\o\k\m\o\m\3\t\i\3\c\t\3\5\f\t\1\e\d\m\3\m\n\8\i\t\7\1\9\r\9\0\6\m\y\1\1\y\q\m\8\l\2\x\u\y\n\e\v\u\c\l\d\z\2\8\o\m\u\0\c\2\x\6\j\m\4\q\4\6\l\2\4\t\1\k\7\l\t\a\4\e\a\3\s\v\1\r\j\2\x\m\k\z\g\u\e\t\5\q\m\i\e\9\o\d\0\h\t\h\v\g\9\0\5\x\8\5\5\1\6\z\5\k\r\t\c\o\k\f\q\4\u\f\3\d\j\n\u\q\f\2\p\a\f\l\u\k\o\1\z\v\d\0\m\m\h\4\j\0\1\n\w\s\d\9\z\j\h\x\r\m\1\d\0\h\2\9\6\n\n\v\r\6\k\7\7\f\d\3\g\x\o\e\r\e\t\v\6\t\x\k\i\c\0\x\x\f\1\0\q\d\2\7\a\a\1\w\o\o\r\b\2\x\f\b\j\6\c\h\u\y\k\g\1\p\o\a\w\e\5\0\f\6\5\h\r\t\m\c\h\1\n\j\o\y\d\g\4\o\o\c\3\3\7\o\u\k\g\7\4\c\z\g\f\6\3\0\t\i\i\h\r\2\8\p\q\f\m\6\o\p\p\9\0\2\w\t\1\7\b\u\1\9\4\l\2\o\5\1\n\d\p\z\f\2\i\3\z\u\n\w\g\x\m\p\r\p\n\7\3\h\4\m\k\5\r\s\s\3\5\h\3\s\y\5\t\j\m\a\r\u\g\v\c\n\s\d\3\v\g\6\n\o\u\9\9\2\n\0\t\s\
v\t\j\3\2\p\f\1\6\v\y\x\i\c\o\y\5\9\7\u\e\q\x\2\u\2\2\c\c\w\v\a\4\7\v\o\k\r\u\3\0\v\h\q\d\j\1\x\3\g\5\x\v\t\y\4\1\u\h\i\c\v\8\d\i\r\v\e\p\y\h\3\m\6\2\d\o\5\g\b\o\t\8\y\b\1\7\3\t\j\i\y\0\9\5\d\e\t\4\3\s\9\1\r\4\y\u\p\6\q\r\1\3\y\p\7\l\7\g\l\g\8\a\r\m\k\o\i\3\8\8\e\s\z\6\d\1\g\h\l\2\u\p\b\b\f\8\m\t\g\x\j\r\g\0\w\5\4\3\e\7\5\t\c\h\i\f\y\j\n\e\o\4\y\b\2\7\d\v\8\8\7\l\r\k\i\3\z\g\v\6\d\8\n\5\t\9\p\y\2\t\u\m\a\l\2\p\l\2\b\6\s\h\0\k\6\s\j\z\9\z\6\b\x\x\5\2\b\2\k\9\6\c\7\t\l\c\i\p\h\x\t\h\4\a\f\l\s\g\2\4\9\e\8\j\n\1\z\5\q\f\4\7\t\0\p\o\o\s\j\y\8\6\1\r\m\q\i\j\8\m\n\q\6\6\x\b\x\8\i\x\l\7\h\o\7\v\0\m\b\j\t\5\a\c\9\o\q\i\q\k\4\3\3\l\3\8\r\1\g\p\i\q\r\6\c\h\s\t\1\n\f\v\d\v\h\1\x\l\n\5\s\n\0\g\a\m\v\a\b\u\2\j\h\e\m\y\3\q\0\b\a\l\c\c\c\c\q\i\8\d\a\o\5\m\9\3\a\z\x\s\a\q\l\5\5\u\4\d\y\t\k\x\i\d\5\a\z\k\x\d\z\2\b\r\q\z\1\s\g\m\7\2\d\b\4\v\g\7\v\9\w\7\8\i\h\f\7\o\3\q\s\r\4\z\c\y\k\u\9\s\1\1\6\o\g\9\4\v\6\r\m\r\a\e\m\r\i\k\g\g\l\c\v\d\b\m\l\f\l\d\t\6\a\i\s\g\v\8\7\g\s\q\1\x\8\x\s\r\p\h\9\n\8\1\n\l\b\2\3\8\2\7\f\1\h\r\b\k\u\d\3\z\v\d\m\7\o\p\5\4\b\b\f\j\i\a\5\p\4\j\8\p\3\6\2\x\n\q\z\z\g\g\q\y\z\y\e\3\r\e\u\0\z\k\w\r\3\9\h\z\4\6\4\n\e\b\r\i\k\s\d\b\j\u\l\y\1\j\t\g\s\s\i\0\f\7\9\l\c\q\g\t\8\k\i\x\6\e\m\o\h\t\u\q\s\f\f\c\2\y\a\v\u\9\k\7\g\y\8\u\1\o\6\m\w\4\3\v\w\o\x\t\e\w\4\5\w\5\a\y\d\8\0\f\k\n\n\y\e\v\2\n\a\e\z\c\y\q\n\o\3\h\s\z\7\2\u\q\j\6\o\r\v\k\8\g\i\5\g\u\1\a\g\o\v\k\w\b\y\9\x\a\h\i\p\n\7\w\2\g\k\v\f\a\4\7\z\z\p\f\3\v\c\l\x\g\s\x\s\4\6\9\9\5\l\u\n\k\s\4\r\f\2\e\s\r\t\5\k\s\f\t\5\c\f\h\v\4\y\v\l\n\g\x\7\5\j\1\s\z\j\b\7\b\e\7\j\q\q\r\0\a\e\c\3\e\c\g\4\9\u\v\5\9\o\m\6\m\b\4\7\6\f\0\g\i\3\1\t\c\5\3\7\5\a\q\n\7\6\y\9\s\e\3\1\h\7\0\9\f\l\m\7\k\1\1\d\6\0\g\5\4\m\5\s\t\7\x\7\7\l\m\t\g\d\m\j\a\z\i\f\g\7\v\q\i\n\d\v\i\k\s\s\o\3\n\j\6\z\c\f\l\i\t\b\7\q\u\x\v\3\r\w\p\0\k\9\g\w\a\z\7\9\0\6\h\c\3\7\p\e\j\n\m\o\k\o\f\h\e\k\w\b\l\4\9\c\n\g\4\k\2\q\j\u\l\8\5\a\z\9\k\1\2\x\9\2\k\x\5\f\o\t\g\s\8\t\k\d\w\7\j\3\n\p\q\s\t\9\r\d\4\0\7\t\c\9\b\n\j\p\8\q\9\j\c\j\l\1\4\q\r\6\n\m\j\b\7\2\n\1\7\7\t\h\n\f\y\0\6\0\4\g\8\g\a\0\u\p\2\w\7\9\a\y\f\p\4\j\s\n\l\8\a\o\z\t\2\4\3\3\d\w\e\3\z\l\l\n\u\n\j\7\r\e\y\6\t\8\w\5\u\l\k\l\x\1\k\m\7\j\2\3\g\7\f\j\m\g\n\a\k\f\3\z\4\2\9\t\q\g\l\1\t\e\2\g\8\2\p\d\r\2\i\7\k\6\e\1\v\6\k\p\h\h\r\f\t\c\x\1\k\q\c\5\f\q\e\6\8\f\7\0\f\v\g\1\9\4\a\7\f\h\x\1\j\w\q\2\v\i\i\t\u\r\f\o\c\a\f\3\k\d\v\t\j\f\9\r\6\u\q\r\3\7\6\a\t\i\3\5\6\m\u\j\5\h\s\8\l\p\b\d\x\o\k\q\0\3\o\a\v\r\7\b\7\u\t\p\l\u\t\9\0\w\y\t\c\d\o\8\6\6\9\r\p\7\n\w\j\4\s\s\i\7\n\6\v\f\p\i\q\r\x\p\7\y\m\y\j\i\j\2\q\e\4\i\e\k\6\x\i\7\a\q\e\y\n\d\7\f\a\1\q\c\g\f\m\v\a\2\h\x\5\z\1\v\0\9\3\q\p\6\x\j\o\s\l\5\x\5\b\v\j\9\o\y\g\f\g\i\q\n\s\c\b\c\q\p\a\n\d\u\0\y\e\j\1\j\3\o\l\d\0\t\s\v\n\s\6\i\t\w\5\k\3\l\w\i\1\v\n\b\x\y\5\n\7\n\5\2\6\2\q\0\a\h\k\x\j\0\4\n\v\e\n\n\4\9\6\s\n\d\j\c\h\p\4\j\v\c\b\s\w\e\8\7\n\n\s\h\5\2\d\n\s\9\v\g\2\d\o\6\k\n\0\b\j\w\g\x\9\j\k\i\m\6\b\a\v\r\m\n\v\t\p\9\p\4\i\h\t\d\j\1\n\r\3\c\c\s\6\z\n\0\i\r\x\f\d\7\c\i\i\9\m\o\l\2\l\y\w\c\y\j\5\6\q\6\w\g\8\c\g\6\s\u\k\v\4\p\a\8\z\0\n\1\0\t\r\l\f\0\q\h\e\s\5\8\t\v\1\d\w\d\1\s\1\a\z\n\4\k\3\6\v\f\0\u\x\2\4\6\r\w\6\m\l\c\b\4\w\f\0\2\w\e\g\0\p\f\k\9\6\f\z\s\r\v\g\b\y\a\1\z\x\x\z\c\o\s\w\j\d\y\f\a\k\k\n\u\6\y\4\b\8\t\t\r\9\e\g\x\g\5\1\9\o\l\n\5\e\2\8\g\6\b\o\o\f\a\2\0\1\g\x\b\t\1\o\u\g\d\m\i\l\8\r\l\s\s\6\d\b\b\2\g\1\l\t\r\w\s\w\v\p\v\t\0\t\0\l\h\d\q\6\t\d\j\n\9\o\w\4\q\w\9\j\x\1\5\i\9\j\x\0\4\f\p\m\9\7\1\8\k\y\p\s\1\j\q\1\w\v\0\v\z\b\8\i\e\z\o\k\8\8\x\j\f\4\6\7\w\l\3\5\u\f\o\v\3\9\m\s\g\8\f\a\q\b\i\k\i\i\v\k\3\w\2\s\h\k\y\5\5\n\4\x\d\0\y\l\l\c\a\o\t\7\3\m\q\8\w\8\9\v\3\2\c\e\5\g\m\c\h\n\4\g\i\2\g\a\o\n\9\b\y
\h\p\8\4\r\b\2\s\3\m\d\1\f\6\r\t\v\p\l\8\8\2\g\0\s\e\m\e\b\7\d\p\t\1\r\e\6\z\r\n\3\a\2\p\3\1\w\x\0\q\p\b\4\i\s\v\w\c\g\q\5\e\q\a\q\z\r\x\g\t\f\v\2\0\2\a\t\f\7\n\z\i\o\c\n\e\o\e\r\x\s\k\q\3\s\v\u\j\8\2\u\0\x\4\y\w\x\p\n\9\o\4\x\x\v\8\b\y\7\g\8\n\a\g\o\5\c\h\k\v\5\9\p\t\a\8\n\s\2\k\q\2\c\n\o\e\g\t\r\q\p\n\k\9\4\a\q\o\q\4\2\4\u\0\s\5\q\9\9\p\e\c\7\2\b\1\4\y\z\j\o\s\2\0\t\9\e\3\j\z\l\3\8\h\s\b\p\z\d\7\v\1\b\m\7\r\b\2\9\4\d\w\t\x\i\r\h\r\f\y\e\y\b\w\o\r\2\o\q\a\0\v\9\c\w\o\a\n\e\l\v\x\j\6\r\j\d\3\c\6\g\c\1\p\v\b\k\y\f\8\m\s\o\a\5\v\r\b\d\8\g\w\u\x\c\8\4\c\u\3\2\8\8\k\q\z\o\n\v\8\r\g\l\w\8\m\v\s\7\i\e\p\d\p\g\6\0\x\n\r\s\n\h\y\w\d\u\z\4\h\w\l\q\s\k\c\k\f\e\n\c\v\g\p\o\y\c\w\2\4\6\0\c\7\a\9\1\z\h\e\g\v\d\g\5\p\k\h\t\2\l\b\w\2\k\6\1\6\c\w\6\n\o\p\x\2\b\0\r\s\a\8\m\j\2\a\7\7\i\v\f\t\w\t\3\9\0\v\4\8\x\r\j\6\t\6\9\b\d\j\p\x\e\l\f\c\y\n\t\6\i\5\1\t\o\f\1\1\9\h\x\9\2\w\p\h\w\f\f\8\n\b\j\d\e\o\2\f\z\9\5\k\a\8\i\6\b\2\j\o\o\k\f\i\d\3\6\a\s\s\u\k\n\p\e\o\u\w\w\0\g\a\i\1\f\z\1\4\f\g\q\r\3\h\1\1\v\f\u\i\o\v\u\i\c\0\9\e\n\i\j\e\o\0\q\d\d\l\d\p\c\g\v\a\a\e\k\0\s\p\h\8\l\1\l\x\9\e\e\1\k ]] 00:38:34.814 00:38:34.814 real 0m1.157s 00:38:34.814 user 0m0.843s 00:38:34.814 sys 0m0.412s 00:38:34.814 ************************************ 00:38:34.814 END TEST dd_rw_offset 00:38:34.814 ************************************ 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:34.814 05:29:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:35.118 [2024-12-09 05:29:21.796686] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:35.118 [2024-12-09 05:29:21.796968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59940 ] 00:38:35.118 { 00:38:35.118 "subsystems": [ 00:38:35.118 { 00:38:35.118 "subsystem": "bdev", 00:38:35.118 "config": [ 00:38:35.118 { 00:38:35.118 "params": { 00:38:35.118 "trtype": "pcie", 00:38:35.118 "traddr": "0000:00:10.0", 00:38:35.118 "name": "Nvme0" 00:38:35.118 }, 00:38:35.118 "method": "bdev_nvme_attach_controller" 00:38:35.118 }, 00:38:35.118 { 00:38:35.118 "method": "bdev_wait_for_examine" 00:38:35.118 } 00:38:35.118 ] 00:38:35.118 } 00:38:35.118 ] 00:38:35.118 } 00:38:35.118 [2024-12-09 05:29:21.947912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.118 [2024-12-09 05:29:21.987214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.118 [2024-12-09 05:29:22.019998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:35.375  [2024-12-09T05:29:22.347Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:35.375 00:38:35.375 05:29:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:35.375 ************************************ 00:38:35.375 END TEST spdk_dd_basic_rw 00:38:35.375 ************************************ 00:38:35.375 00:38:35.375 real 0m16.291s 00:38:35.375 user 0m12.281s 00:38:35.375 sys 0m4.656s 00:38:35.375 05:29:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:35.375 05:29:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:35.375 05:29:22 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:38:35.375 05:29:22 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:35.375 05:29:22 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:35.375 05:29:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:35.634 ************************************ 00:38:35.634 START TEST spdk_dd_posix 00:38:35.634 ************************************ 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:38:35.634 * Looking for test storage... 
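The --json /dev/fd/62 argument on the spdk_dd invocations above carries a minimal bdev subsystem configuration, so the NVMe controller at 0000:00:10.0 is attached and examined before any copy starts; clear_nvme then zeroes the first MiB of the resulting Nvme0n1 bdev. A sketch of that hand-off, with the JSON taken from the log and a process substitution standing in for the gen_conf helper in dd/common.sh:

    # Reconstruction of the config hand-off; gen_conf normally emits this JSON on a pipe (/dev/fd/62 above).
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
       "method":"bdev_nvme_attach_controller"},
      {"method":"bdev_wait_for_examine"}]}]}'
    # clear_nvme: overwrite the first 1 MiB of the attached bdev with zeroes
    ./build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
        --json <(printf '%s' "$conf")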
00:38:35.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:35.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.634 --rc genhtml_branch_coverage=1 00:38:35.634 --rc genhtml_function_coverage=1 00:38:35.634 --rc genhtml_legend=1 00:38:35.634 --rc geninfo_all_blocks=1 00:38:35.634 --rc geninfo_unexecuted_blocks=1 00:38:35.634 00:38:35.634 ' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:35.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.634 --rc genhtml_branch_coverage=1 00:38:35.634 --rc genhtml_function_coverage=1 00:38:35.634 --rc genhtml_legend=1 00:38:35.634 --rc geninfo_all_blocks=1 00:38:35.634 --rc geninfo_unexecuted_blocks=1 00:38:35.634 00:38:35.634 ' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:35.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.634 --rc genhtml_branch_coverage=1 00:38:35.634 --rc genhtml_function_coverage=1 00:38:35.634 --rc genhtml_legend=1 00:38:35.634 --rc geninfo_all_blocks=1 00:38:35.634 --rc geninfo_unexecuted_blocks=1 00:38:35.634 00:38:35.634 ' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:35.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.634 --rc genhtml_branch_coverage=1 00:38:35.634 --rc genhtml_function_coverage=1 00:38:35.634 --rc genhtml_legend=1 00:38:35.634 --rc geninfo_all_blocks=1 00:38:35.634 --rc geninfo_unexecuted_blocks=1 00:38:35.634 00:38:35.634 ' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:38:35.634 * First test run, liburing in use 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:35.634 ************************************ 00:38:35.634 START TEST dd_flag_append 00:38:35.634 ************************************ 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:38:35.634 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:35.635 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=zv1imkawr2mepxvqsfxqluwsqc8clvhs 00:38:35.635 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:38:35.635 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:38:35.635 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:35.635 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=9q6c7zu8mbntgbdlrgehax2vgp06vbkr 00:38:35.635 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s zv1imkawr2mepxvqsfxqluwsqc8clvhs 00:38:35.635 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 9q6c7zu8mbntgbdlrgehax2vgp06vbkr 00:38:35.635 05:29:22 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:38:35.892 [2024-12-09 05:29:22.614325] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
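The append test above boils down to: generate two 32-byte random strings, write one to each dump file, run spdk_dd with --oflag=append so the contents of dd.dump0 land after the existing contents of dd.dump1, and then require the destination to read back as the second string followed by the first (the [[ ... == ... ]] check that follows). A condensed sketch, with paths shortened and gen_bytes being the test helper that produces the random strings:

    dump0=$(gen_bytes 32)                      # zv1imkaw... in this run
    dump1=$(gen_bytes 32)                      # 9q6c7zu8... in this run
    printf %s "$dump0" > test/dd/dd.dump0
    printf %s "$dump1" > test/dd/dd.dump1
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=append
    [[ $(<test/dd/dd.dump1) == "${dump1}${dump0}" ]]   # append puts dump0 after dump1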
00:38:35.892 [2024-12-09 05:29:22.614428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60008 ] 00:38:35.892 [2024-12-09 05:29:22.769886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.892 [2024-12-09 05:29:22.810110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.892 [2024-12-09 05:29:22.844557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:36.151  [2024-12-09T05:29:23.123Z] Copying: 32/32 [B] (average 31 kBps) 00:38:36.151 00:38:36.151 ************************************ 00:38:36.151 END TEST dd_flag_append 00:38:36.151 ************************************ 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 9q6c7zu8mbntgbdlrgehax2vgp06vbkrzv1imkawr2mepxvqsfxqluwsqc8clvhs == \9\q\6\c\7\z\u\8\m\b\n\t\g\b\d\l\r\g\e\h\a\x\2\v\g\p\0\6\v\b\k\r\z\v\1\i\m\k\a\w\r\2\m\e\p\x\v\q\s\f\x\q\l\u\w\s\q\c\8\c\l\v\h\s ]] 00:38:36.151 00:38:36.151 real 0m0.508s 00:38:36.151 user 0m0.297s 00:38:36.151 sys 0m0.190s 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:36.151 ************************************ 00:38:36.151 START TEST dd_flag_directory 00:38:36.151 ************************************ 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:36.151 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:36.409 [2024-12-09 05:29:23.167702] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:36.409 [2024-12-09 05:29:23.167807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60042 ] 00:38:36.409 [2024-12-09 05:29:23.320332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.409 [2024-12-09 05:29:23.353514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.667 [2024-12-09 05:29:23.383039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:36.667 [2024-12-09 05:29:23.402756] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:36.667 [2024-12-09 05:29:23.402816] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:36.667 [2024-12-09 05:29:23.402837] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:36.667 [2024-12-09 05:29:23.471052] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.667 05:29:23 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:36.667 05:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:36.667 [2024-12-09 05:29:23.634918] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:36.667 [2024-12-09 05:29:23.635014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60046 ] 00:38:36.926 [2024-12-09 05:29:23.786197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.926 [2024-12-09 05:29:23.819595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.926 [2024-12-09 05:29:23.849616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:36.926 [2024-12-09 05:29:23.869618] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:36.926 [2024-12-09 05:29:23.869670] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:36.926 [2024-12-09 05:29:23.869690] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:37.184 [2024-12-09 05:29:23.937009] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:37.184 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:38:37.184 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:37.184 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:38:37.184 ************************************ 00:38:37.184 END TEST dd_flag_directory 00:38:37.184 ************************************ 00:38:37.184 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:38:37.184 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:38:37.184 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:37.184 00:38:37.184 real 0m1.005s 00:38:37.184 user 0m0.609s 00:38:37.184 sys 0m0.187s 00:38:37.184 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:37.184 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:38:37.456 05:29:24 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:37.456 ************************************ 00:38:37.456 START TEST dd_flag_nofollow 00:38:37.456 ************************************ 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:37.456 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:37.456 [2024-12-09 05:29:24.226918] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
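dd_flag_nofollow first points symlinks at both dump files (the ln -fs calls above) and then exercises O_NOFOLLOW from both directions: opening the symlinked input with --iflag=nofollow, and later the symlinked output with --oflag=nofollow, must fail with "Too many levels of symbolic links", while the final copy through the link without the flag still succeeds. The NOT wrapper inverts the exit status, so run_test only passes when spdk_dd refuses the symlink. Sketch of the setup and the first negative check (paths abbreviated):

    ln -fs test/dd/dd.dump0 test/dd/dd.dump0.link
    ln -fs test/dd/dd.dump1 test/dd/dd.dump1.link
    # expected to fail: nofollow refuses to open a symlinked input file
    NOT ./build/bin/spdk_dd --if=test/dd/dd.dump0.link --iflag=nofollow --of=test/dd/dd.dump1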
00:38:37.456 [2024-12-09 05:29:24.227018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60080 ] 00:38:37.456 [2024-12-09 05:29:24.387920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.714 [2024-12-09 05:29:24.426498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.714 [2024-12-09 05:29:24.459160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:37.714 [2024-12-09 05:29:24.480989] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:37.715 [2024-12-09 05:29:24.481061] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:37.715 [2024-12-09 05:29:24.481085] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:37.715 [2024-12-09 05:29:24.551790] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.715 05:29:24 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:37.715 05:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:37.973 [2024-12-09 05:29:24.722930] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:37.973 [2024-12-09 05:29:24.723258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60084 ] 00:38:37.973 [2024-12-09 05:29:24.874085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.973 [2024-12-09 05:29:24.907958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.973 [2024-12-09 05:29:24.937145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:38.232 [2024-12-09 05:29:24.961546] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:38.232 [2024-12-09 05:29:24.961657] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:38.232 [2024-12-09 05:29:24.961691] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:38.232 [2024-12-09 05:29:25.028687] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:38.232 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:38:38.232 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:38.232 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:38:38.232 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:38:38.232 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:38:38.232 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:38.232 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:38:38.232 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:38:38.232 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:38:38.232 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:38.232 [2024-12-09 05:29:25.192627] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
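The es= bookkeeping in the trace above is how the negative checks (here and in the directory test before it) are graded by common/autotest_common.sh: the wrapped command's real exit status is captured (236 or 216 from the "Not a directory" / symlink errors), values above 128 are folded down, anything non-zero is collapsed to 1, and the wrapper finally returns the negation so that an expected failure counts as success. A rough, simplified reconstruction of that helper (the real NOT/valid_exec_arg code does more argument checking than shown):

    NOT() {
        local es=0
        "$@" || es=$?                          # run the command, keep its exit status
        (( es > 128 )) && es=$(( es - 128 ))   # 236 -> 108, 216 -> 88, as in the trace
        (( es != 0 )) && es=1                  # collapse any failure to 1
        (( !es == 0 ))                         # success only if the wrapped command failed
    }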
00:38:38.232 [2024-12-09 05:29:25.192720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60097 ] 00:38:38.490 [2024-12-09 05:29:25.339443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.490 [2024-12-09 05:29:25.372089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:38.490 [2024-12-09 05:29:25.400650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:38.490  [2024-12-09T05:29:25.721Z] Copying: 512/512 [B] (average 500 kBps) 00:38:38.749 00:38:38.749 ************************************ 00:38:38.749 END TEST dd_flag_nofollow 00:38:38.749 ************************************ 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ifruql6vkm54533itrnvosahk66aioynyerv3q4l67ko15e36hlfgmy7uajuqpm04t6tvac89i4jmhflzkcc642trmf8lbg6r2c84xzb962mz7hfr1gaqknmwbpdb95fkcahef90kxfx4orgjppht7uwfdj6532nkliii8saabihpmpu58a74ai5bhzsycxpb9ojiiy3jug9wtttfhffl0f6ms8xl1w6ys2ey55vhgn4uud8am8673yj1um2136ftrjqelrufow1xy8zpn5se2lhlmmt5znmlmd02u2gsrohwz9x4cvfvnwje4gzk9h4p6m0qtjffwuo135832uozg4yh5vjzx5hoapn5iez0xtz5oi8cx4zsobu62p8sydo0tmzgeydorpa98m7ksto7sg48055cm1xnpq5gevam7o71bv53z9e2xby53llfve1bq8qit2s70ixsazfssiqizdi5wn90dqayqbe34st4wj2hvbcgb9738kw78c9xpxu == \i\f\r\u\q\l\6\v\k\m\5\4\5\3\3\i\t\r\n\v\o\s\a\h\k\6\6\a\i\o\y\n\y\e\r\v\3\q\4\l\6\7\k\o\1\5\e\3\6\h\l\f\g\m\y\7\u\a\j\u\q\p\m\0\4\t\6\t\v\a\c\8\9\i\4\j\m\h\f\l\z\k\c\c\6\4\2\t\r\m\f\8\l\b\g\6\r\2\c\8\4\x\z\b\9\6\2\m\z\7\h\f\r\1\g\a\q\k\n\m\w\b\p\d\b\9\5\f\k\c\a\h\e\f\9\0\k\x\f\x\4\o\r\g\j\p\p\h\t\7\u\w\f\d\j\6\5\3\2\n\k\l\i\i\i\8\s\a\a\b\i\h\p\m\p\u\5\8\a\7\4\a\i\5\b\h\z\s\y\c\x\p\b\9\o\j\i\i\y\3\j\u\g\9\w\t\t\t\f\h\f\f\l\0\f\6\m\s\8\x\l\1\w\6\y\s\2\e\y\5\5\v\h\g\n\4\u\u\d\8\a\m\8\6\7\3\y\j\1\u\m\2\1\3\6\f\t\r\j\q\e\l\r\u\f\o\w\1\x\y\8\z\p\n\5\s\e\2\l\h\l\m\m\t\5\z\n\m\l\m\d\0\2\u\2\g\s\r\o\h\w\z\9\x\4\c\v\f\v\n\w\j\e\4\g\z\k\9\h\4\p\6\m\0\q\t\j\f\f\w\u\o\1\3\5\8\3\2\u\o\z\g\4\y\h\5\v\j\z\x\5\h\o\a\p\n\5\i\e\z\0\x\t\z\5\o\i\8\c\x\4\z\s\o\b\u\6\2\p\8\s\y\d\o\0\t\m\z\g\e\y\d\o\r\p\a\9\8\m\7\k\s\t\o\7\s\g\4\8\0\5\5\c\m\1\x\n\p\q\5\g\e\v\a\m\7\o\7\1\b\v\5\3\z\9\e\2\x\b\y\5\3\l\l\f\v\e\1\b\q\8\q\i\t\2\s\7\0\i\x\s\a\z\f\s\s\i\q\i\z\d\i\5\w\n\9\0\d\q\a\y\q\b\e\3\4\s\t\4\w\j\2\h\v\b\c\g\b\9\7\3\8\k\w\7\8\c\9\x\p\x\u ]] 00:38:38.749 00:38:38.749 real 0m1.431s 00:38:38.749 user 0m0.818s 00:38:38.749 sys 0m0.375s 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:38.749 ************************************ 00:38:38.749 START TEST dd_flag_noatime 00:38:38.749 ************************************ 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733722165 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733722165 00:38:38.749 05:29:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:38:40.121 05:29:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:40.121 [2024-12-09 05:29:26.716735] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:40.121 [2024-12-09 05:29:26.716832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60134 ] 00:38:40.121 [2024-12-09 05:29:26.873602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:40.121 [2024-12-09 05:29:26.912828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.121 [2024-12-09 05:29:26.944456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:40.121  [2024-12-09T05:29:27.350Z] Copying: 512/512 [B] (average 500 kBps) 00:38:40.378 00:38:40.378 05:29:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:40.378 05:29:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733722165 )) 00:38:40.378 05:29:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:40.378 05:29:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733722165 )) 00:38:40.378 05:29:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:40.378 [2024-12-09 05:29:27.202337] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
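dd_flag_noatime pins down the access-time behaviour: the source file's atime is recorded (stat --printf=%X returned 1733722165 here), the file is copied once with --iflag=noatime and the atime must not move, then copied again without the flag, after which the atime has to advance (the (( atime_if < ... )) check that follows). A condensed sketch of those checks:

    atime_if=$(stat --printf=%X test/dd/dd.dump0)            # 1733722165 in this run
    sleep 1
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --iflag=noatime --of=test/dd/dd.dump1
    (( $(stat --printf=%X test/dd/dd.dump0) == atime_if ))   # noatime read leaves atime alone
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1
    (( atime_if < $(stat --printf=%X test/dd/dd.dump0) ))    # a normal read bumps it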
00:38:40.378 [2024-12-09 05:29:27.202464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60153 ] 00:38:40.636 [2024-12-09 05:29:27.352742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:40.636 [2024-12-09 05:29:27.385581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.636 [2024-12-09 05:29:27.414167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:40.636  [2024-12-09T05:29:27.608Z] Copying: 512/512 [B] (average 500 kBps) 00:38:40.636 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733722167 )) 00:38:40.893 00:38:40.893 real 0m1.972s 00:38:40.893 user 0m0.554s 00:38:40.893 sys 0m0.367s 00:38:40.893 ************************************ 00:38:40.893 END TEST dd_flag_noatime 00:38:40.893 ************************************ 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:40.893 ************************************ 00:38:40.893 START TEST dd_flags_misc 00:38:40.893 ************************************ 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:40.893 05:29:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:40.893 [2024-12-09 05:29:27.721119] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:40.893 [2024-12-09 05:29:27.721212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60176 ] 00:38:41.151 [2024-12-09 05:29:27.866643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.151 [2024-12-09 05:29:27.899134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.151 [2024-12-09 05:29:27.928058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:41.151  [2024-12-09T05:29:28.380Z] Copying: 512/512 [B] (average 500 kBps) 00:38:41.408 00:38:41.408 05:29:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sqr9s6j7bq2nglc1v5uyytysdve11f0hfc8h7qm6hypeu56598rmsfko2c25ytv5siy7a3zr8nzn7xksry9ki6xeobmowdtr3nudq7u9dcpkr6qgpon1pgywxgumoewkjibyk49jn5bdi8nsesbbr0ekj6qm0i7mcdfkz9gmsj25gvmm2i7wv44g1ujnf54v9kolypwz23d7w1o2rqd2d5jgap6f3ml4zi9370tsxlaivkprwge15mt37zb05s8om3fm076qls1tr5ro9o9xn25whpuyaiebtzpriifszznj5s4xp0kdj685yx13bgyexafli3lq5y06v4c723i7ifjsw3r3bkx8n58iqj0hcggr8cca279hu8cskijy2q1zaukfcyy99ayxyorqvi4ylmgowutg825ocu59ui1t9qiamnupd1bm71li54x1nkeo0hsqqnh5wnpprl6id3tj6kqdm3j2kcxrixy77a03qilmvnv1z23b0q0dwxm1e2p7 == \s\q\r\9\s\6\j\7\b\q\2\n\g\l\c\1\v\5\u\y\y\t\y\s\d\v\e\1\1\f\0\h\f\c\8\h\7\q\m\6\h\y\p\e\u\5\6\5\9\8\r\m\s\f\k\o\2\c\2\5\y\t\v\5\s\i\y\7\a\3\z\r\8\n\z\n\7\x\k\s\r\y\9\k\i\6\x\e\o\b\m\o\w\d\t\r\3\n\u\d\q\7\u\9\d\c\p\k\r\6\q\g\p\o\n\1\p\g\y\w\x\g\u\m\o\e\w\k\j\i\b\y\k\4\9\j\n\5\b\d\i\8\n\s\e\s\b\b\r\0\e\k\j\6\q\m\0\i\7\m\c\d\f\k\z\9\g\m\s\j\2\5\g\v\m\m\2\i\7\w\v\4\4\g\1\u\j\n\f\5\4\v\9\k\o\l\y\p\w\z\2\3\d\7\w\1\o\2\r\q\d\2\d\5\j\g\a\p\6\f\3\m\l\4\z\i\9\3\7\0\t\s\x\l\a\i\v\k\p\r\w\g\e\1\5\m\t\3\7\z\b\0\5\s\8\o\m\3\f\m\0\7\6\q\l\s\1\t\r\5\r\o\9\o\9\x\n\2\5\w\h\p\u\y\a\i\e\b\t\z\p\r\i\i\f\s\z\z\n\j\5\s\4\x\p\0\k\d\j\6\8\5\y\x\1\3\b\g\y\e\x\a\f\l\i\3\l\q\5\y\0\6\v\4\c\7\2\3\i\7\i\f\j\s\w\3\r\3\b\k\x\8\n\5\8\i\q\j\0\h\c\g\g\r\8\c\c\a\2\7\9\h\u\8\c\s\k\i\j\y\2\q\1\z\a\u\k\f\c\y\y\9\9\a\y\x\y\o\r\q\v\i\4\y\l\m\g\o\w\u\t\g\8\2\5\o\c\u\5\9\u\i\1\t\9\q\i\a\m\n\u\p\d\1\b\m\7\1\l\i\5\4\x\1\n\k\e\o\0\h\s\q\q\n\h\5\w\n\p\p\r\l\6\i\d\3\t\j\6\k\q\d\m\3\j\2\k\c\x\r\i\x\y\7\7\a\0\3\q\i\l\m\v\n\v\1\z\2\3\b\0\q\0\d\w\x\m\1\e\2\p\7 ]] 00:38:41.408 05:29:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:41.408 05:29:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:41.408 [2024-12-09 05:29:28.168634] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:41.408 [2024-12-09 05:29:28.168732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60191 ] 00:38:41.408 [2024-12-09 05:29:28.313714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.408 [2024-12-09 05:29:28.346726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.408 [2024-12-09 05:29:28.375825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:41.665  [2024-12-09T05:29:28.638Z] Copying: 512/512 [B] (average 500 kBps) 00:38:41.666 00:38:41.666 05:29:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sqr9s6j7bq2nglc1v5uyytysdve11f0hfc8h7qm6hypeu56598rmsfko2c25ytv5siy7a3zr8nzn7xksry9ki6xeobmowdtr3nudq7u9dcpkr6qgpon1pgywxgumoewkjibyk49jn5bdi8nsesbbr0ekj6qm0i7mcdfkz9gmsj25gvmm2i7wv44g1ujnf54v9kolypwz23d7w1o2rqd2d5jgap6f3ml4zi9370tsxlaivkprwge15mt37zb05s8om3fm076qls1tr5ro9o9xn25whpuyaiebtzpriifszznj5s4xp0kdj685yx13bgyexafli3lq5y06v4c723i7ifjsw3r3bkx8n58iqj0hcggr8cca279hu8cskijy2q1zaukfcyy99ayxyorqvi4ylmgowutg825ocu59ui1t9qiamnupd1bm71li54x1nkeo0hsqqnh5wnpprl6id3tj6kqdm3j2kcxrixy77a03qilmvnv1z23b0q0dwxm1e2p7 == \s\q\r\9\s\6\j\7\b\q\2\n\g\l\c\1\v\5\u\y\y\t\y\s\d\v\e\1\1\f\0\h\f\c\8\h\7\q\m\6\h\y\p\e\u\5\6\5\9\8\r\m\s\f\k\o\2\c\2\5\y\t\v\5\s\i\y\7\a\3\z\r\8\n\z\n\7\x\k\s\r\y\9\k\i\6\x\e\o\b\m\o\w\d\t\r\3\n\u\d\q\7\u\9\d\c\p\k\r\6\q\g\p\o\n\1\p\g\y\w\x\g\u\m\o\e\w\k\j\i\b\y\k\4\9\j\n\5\b\d\i\8\n\s\e\s\b\b\r\0\e\k\j\6\q\m\0\i\7\m\c\d\f\k\z\9\g\m\s\j\2\5\g\v\m\m\2\i\7\w\v\4\4\g\1\u\j\n\f\5\4\v\9\k\o\l\y\p\w\z\2\3\d\7\w\1\o\2\r\q\d\2\d\5\j\g\a\p\6\f\3\m\l\4\z\i\9\3\7\0\t\s\x\l\a\i\v\k\p\r\w\g\e\1\5\m\t\3\7\z\b\0\5\s\8\o\m\3\f\m\0\7\6\q\l\s\1\t\r\5\r\o\9\o\9\x\n\2\5\w\h\p\u\y\a\i\e\b\t\z\p\r\i\i\f\s\z\z\n\j\5\s\4\x\p\0\k\d\j\6\8\5\y\x\1\3\b\g\y\e\x\a\f\l\i\3\l\q\5\y\0\6\v\4\c\7\2\3\i\7\i\f\j\s\w\3\r\3\b\k\x\8\n\5\8\i\q\j\0\h\c\g\g\r\8\c\c\a\2\7\9\h\u\8\c\s\k\i\j\y\2\q\1\z\a\u\k\f\c\y\y\9\9\a\y\x\y\o\r\q\v\i\4\y\l\m\g\o\w\u\t\g\8\2\5\o\c\u\5\9\u\i\1\t\9\q\i\a\m\n\u\p\d\1\b\m\7\1\l\i\5\4\x\1\n\k\e\o\0\h\s\q\q\n\h\5\w\n\p\p\r\l\6\i\d\3\t\j\6\k\q\d\m\3\j\2\k\c\x\r\i\x\y\7\7\a\0\3\q\i\l\m\v\n\v\1\z\2\3\b\0\q\0\d\w\x\m\1\e\2\p\7 ]] 00:38:41.666 05:29:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:41.666 05:29:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:41.666 [2024-12-09 05:29:28.608371] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:41.666 [2024-12-09 05:29:28.608458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60195 ] 00:38:41.923 [2024-12-09 05:29:28.754781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.923 [2024-12-09 05:29:28.787129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.923 [2024-12-09 05:29:28.815660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:41.923  [2024-12-09T05:29:29.153Z] Copying: 512/512 [B] (average 125 kBps) 00:38:42.181 00:38:42.181 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sqr9s6j7bq2nglc1v5uyytysdve11f0hfc8h7qm6hypeu56598rmsfko2c25ytv5siy7a3zr8nzn7xksry9ki6xeobmowdtr3nudq7u9dcpkr6qgpon1pgywxgumoewkjibyk49jn5bdi8nsesbbr0ekj6qm0i7mcdfkz9gmsj25gvmm2i7wv44g1ujnf54v9kolypwz23d7w1o2rqd2d5jgap6f3ml4zi9370tsxlaivkprwge15mt37zb05s8om3fm076qls1tr5ro9o9xn25whpuyaiebtzpriifszznj5s4xp0kdj685yx13bgyexafli3lq5y06v4c723i7ifjsw3r3bkx8n58iqj0hcggr8cca279hu8cskijy2q1zaukfcyy99ayxyorqvi4ylmgowutg825ocu59ui1t9qiamnupd1bm71li54x1nkeo0hsqqnh5wnpprl6id3tj6kqdm3j2kcxrixy77a03qilmvnv1z23b0q0dwxm1e2p7 == \s\q\r\9\s\6\j\7\b\q\2\n\g\l\c\1\v\5\u\y\y\t\y\s\d\v\e\1\1\f\0\h\f\c\8\h\7\q\m\6\h\y\p\e\u\5\6\5\9\8\r\m\s\f\k\o\2\c\2\5\y\t\v\5\s\i\y\7\a\3\z\r\8\n\z\n\7\x\k\s\r\y\9\k\i\6\x\e\o\b\m\o\w\d\t\r\3\n\u\d\q\7\u\9\d\c\p\k\r\6\q\g\p\o\n\1\p\g\y\w\x\g\u\m\o\e\w\k\j\i\b\y\k\4\9\j\n\5\b\d\i\8\n\s\e\s\b\b\r\0\e\k\j\6\q\m\0\i\7\m\c\d\f\k\z\9\g\m\s\j\2\5\g\v\m\m\2\i\7\w\v\4\4\g\1\u\j\n\f\5\4\v\9\k\o\l\y\p\w\z\2\3\d\7\w\1\o\2\r\q\d\2\d\5\j\g\a\p\6\f\3\m\l\4\z\i\9\3\7\0\t\s\x\l\a\i\v\k\p\r\w\g\e\1\5\m\t\3\7\z\b\0\5\s\8\o\m\3\f\m\0\7\6\q\l\s\1\t\r\5\r\o\9\o\9\x\n\2\5\w\h\p\u\y\a\i\e\b\t\z\p\r\i\i\f\s\z\z\n\j\5\s\4\x\p\0\k\d\j\6\8\5\y\x\1\3\b\g\y\e\x\a\f\l\i\3\l\q\5\y\0\6\v\4\c\7\2\3\i\7\i\f\j\s\w\3\r\3\b\k\x\8\n\5\8\i\q\j\0\h\c\g\g\r\8\c\c\a\2\7\9\h\u\8\c\s\k\i\j\y\2\q\1\z\a\u\k\f\c\y\y\9\9\a\y\x\y\o\r\q\v\i\4\y\l\m\g\o\w\u\t\g\8\2\5\o\c\u\5\9\u\i\1\t\9\q\i\a\m\n\u\p\d\1\b\m\7\1\l\i\5\4\x\1\n\k\e\o\0\h\s\q\q\n\h\5\w\n\p\p\r\l\6\i\d\3\t\j\6\k\q\d\m\3\j\2\k\c\x\r\i\x\y\7\7\a\0\3\q\i\l\m\v\n\v\1\z\2\3\b\0\q\0\d\w\x\m\1\e\2\p\7 ]] 00:38:42.181 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:42.181 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:42.181 [2024-12-09 05:29:29.054442] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:42.181 [2024-12-09 05:29:29.054563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60205 ] 00:38:42.438 [2024-12-09 05:29:29.208665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.438 [2024-12-09 05:29:29.248805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.438 [2024-12-09 05:29:29.282434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:42.438  [2024-12-09T05:29:29.668Z] Copying: 512/512 [B] (average 500 kBps) 00:38:42.696 00:38:42.696 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sqr9s6j7bq2nglc1v5uyytysdve11f0hfc8h7qm6hypeu56598rmsfko2c25ytv5siy7a3zr8nzn7xksry9ki6xeobmowdtr3nudq7u9dcpkr6qgpon1pgywxgumoewkjibyk49jn5bdi8nsesbbr0ekj6qm0i7mcdfkz9gmsj25gvmm2i7wv44g1ujnf54v9kolypwz23d7w1o2rqd2d5jgap6f3ml4zi9370tsxlaivkprwge15mt37zb05s8om3fm076qls1tr5ro9o9xn25whpuyaiebtzpriifszznj5s4xp0kdj685yx13bgyexafli3lq5y06v4c723i7ifjsw3r3bkx8n58iqj0hcggr8cca279hu8cskijy2q1zaukfcyy99ayxyorqvi4ylmgowutg825ocu59ui1t9qiamnupd1bm71li54x1nkeo0hsqqnh5wnpprl6id3tj6kqdm3j2kcxrixy77a03qilmvnv1z23b0q0dwxm1e2p7 == \s\q\r\9\s\6\j\7\b\q\2\n\g\l\c\1\v\5\u\y\y\t\y\s\d\v\e\1\1\f\0\h\f\c\8\h\7\q\m\6\h\y\p\e\u\5\6\5\9\8\r\m\s\f\k\o\2\c\2\5\y\t\v\5\s\i\y\7\a\3\z\r\8\n\z\n\7\x\k\s\r\y\9\k\i\6\x\e\o\b\m\o\w\d\t\r\3\n\u\d\q\7\u\9\d\c\p\k\r\6\q\g\p\o\n\1\p\g\y\w\x\g\u\m\o\e\w\k\j\i\b\y\k\4\9\j\n\5\b\d\i\8\n\s\e\s\b\b\r\0\e\k\j\6\q\m\0\i\7\m\c\d\f\k\z\9\g\m\s\j\2\5\g\v\m\m\2\i\7\w\v\4\4\g\1\u\j\n\f\5\4\v\9\k\o\l\y\p\w\z\2\3\d\7\w\1\o\2\r\q\d\2\d\5\j\g\a\p\6\f\3\m\l\4\z\i\9\3\7\0\t\s\x\l\a\i\v\k\p\r\w\g\e\1\5\m\t\3\7\z\b\0\5\s\8\o\m\3\f\m\0\7\6\q\l\s\1\t\r\5\r\o\9\o\9\x\n\2\5\w\h\p\u\y\a\i\e\b\t\z\p\r\i\i\f\s\z\z\n\j\5\s\4\x\p\0\k\d\j\6\8\5\y\x\1\3\b\g\y\e\x\a\f\l\i\3\l\q\5\y\0\6\v\4\c\7\2\3\i\7\i\f\j\s\w\3\r\3\b\k\x\8\n\5\8\i\q\j\0\h\c\g\g\r\8\c\c\a\2\7\9\h\u\8\c\s\k\i\j\y\2\q\1\z\a\u\k\f\c\y\y\9\9\a\y\x\y\o\r\q\v\i\4\y\l\m\g\o\w\u\t\g\8\2\5\o\c\u\5\9\u\i\1\t\9\q\i\a\m\n\u\p\d\1\b\m\7\1\l\i\5\4\x\1\n\k\e\o\0\h\s\q\q\n\h\5\w\n\p\p\r\l\6\i\d\3\t\j\6\k\q\d\m\3\j\2\k\c\x\r\i\x\y\7\7\a\0\3\q\i\l\m\v\n\v\1\z\2\3\b\0\q\0\d\w\x\m\1\e\2\p\7 ]] 00:38:42.696 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:42.696 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:38:42.696 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:38:42.696 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:42.696 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:42.696 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:42.696 [2024-12-09 05:29:29.578592] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:42.696 [2024-12-09 05:29:29.578699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60214 ] 00:38:42.954 [2024-12-09 05:29:29.724232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.954 [2024-12-09 05:29:29.757657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.954 [2024-12-09 05:29:29.787287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:42.954  [2024-12-09T05:29:30.183Z] Copying: 512/512 [B] (average 500 kBps) 00:38:43.211 00:38:43.212 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pm9bqk0ur0lhqmy1cdgaoz5bomqsoigm4dcqn3z7jqgzjjzimbhb6bwj3ld4x46yt72txn0nz0m97br63wnptwfvucyfno8fq61p4v92ijyem7xbwvf6kkvt7twrn3smq2ge7h0tbpqmtt5gqobarrikc1n029nyfhnt1mza2ztea10hlw9jhy3h4jvs3nmjs6dsyfm7xl7ngz8zsrewxovke5b4p0959wlcbb03jdj17dmj5g5g49qprqpr8czjy6460p3f9jvchgmk346af6f5jnysgfdfupv66bvepcplq45luoymn6et4sy01a07fe1ejylnj6fu72ysgfkas5incglft5vaae28ivtekx4tquvu13v74c9fc7okocr5ow3jpcwdon1fzh0tw8tadg9r0ejuxdvwu4o8qmvgn7y4wstfy2dvkr5neztdci5lwcisl8veoiptchbkaely4q0zuv4n9287kp0wj0n0rpo00v7oteua3wnd7p039dga == \p\m\9\b\q\k\0\u\r\0\l\h\q\m\y\1\c\d\g\a\o\z\5\b\o\m\q\s\o\i\g\m\4\d\c\q\n\3\z\7\j\q\g\z\j\j\z\i\m\b\h\b\6\b\w\j\3\l\d\4\x\4\6\y\t\7\2\t\x\n\0\n\z\0\m\9\7\b\r\6\3\w\n\p\t\w\f\v\u\c\y\f\n\o\8\f\q\6\1\p\4\v\9\2\i\j\y\e\m\7\x\b\w\v\f\6\k\k\v\t\7\t\w\r\n\3\s\m\q\2\g\e\7\h\0\t\b\p\q\m\t\t\5\g\q\o\b\a\r\r\i\k\c\1\n\0\2\9\n\y\f\h\n\t\1\m\z\a\2\z\t\e\a\1\0\h\l\w\9\j\h\y\3\h\4\j\v\s\3\n\m\j\s\6\d\s\y\f\m\7\x\l\7\n\g\z\8\z\s\r\e\w\x\o\v\k\e\5\b\4\p\0\9\5\9\w\l\c\b\b\0\3\j\d\j\1\7\d\m\j\5\g\5\g\4\9\q\p\r\q\p\r\8\c\z\j\y\6\4\6\0\p\3\f\9\j\v\c\h\g\m\k\3\4\6\a\f\6\f\5\j\n\y\s\g\f\d\f\u\p\v\6\6\b\v\e\p\c\p\l\q\4\5\l\u\o\y\m\n\6\e\t\4\s\y\0\1\a\0\7\f\e\1\e\j\y\l\n\j\6\f\u\7\2\y\s\g\f\k\a\s\5\i\n\c\g\l\f\t\5\v\a\a\e\2\8\i\v\t\e\k\x\4\t\q\u\v\u\1\3\v\7\4\c\9\f\c\7\o\k\o\c\r\5\o\w\3\j\p\c\w\d\o\n\1\f\z\h\0\t\w\8\t\a\d\g\9\r\0\e\j\u\x\d\v\w\u\4\o\8\q\m\v\g\n\7\y\4\w\s\t\f\y\2\d\v\k\r\5\n\e\z\t\d\c\i\5\l\w\c\i\s\l\8\v\e\o\i\p\t\c\h\b\k\a\e\l\y\4\q\0\z\u\v\4\n\9\2\8\7\k\p\0\w\j\0\n\0\r\p\o\0\0\v\7\o\t\e\u\a\3\w\n\d\7\p\0\3\9\d\g\a ]] 00:38:43.212 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:43.212 05:29:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:43.212 [2024-12-09 05:29:30.023739] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:43.212 [2024-12-09 05:29:30.023840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60218 ] 00:38:43.212 [2024-12-09 05:29:30.171641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.470 [2024-12-09 05:29:30.203660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.470 [2024-12-09 05:29:30.231690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:43.470  [2024-12-09T05:29:30.442Z] Copying: 512/512 [B] (average 500 kBps) 00:38:43.470 00:38:43.470 05:29:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pm9bqk0ur0lhqmy1cdgaoz5bomqsoigm4dcqn3z7jqgzjjzimbhb6bwj3ld4x46yt72txn0nz0m97br63wnptwfvucyfno8fq61p4v92ijyem7xbwvf6kkvt7twrn3smq2ge7h0tbpqmtt5gqobarrikc1n029nyfhnt1mza2ztea10hlw9jhy3h4jvs3nmjs6dsyfm7xl7ngz8zsrewxovke5b4p0959wlcbb03jdj17dmj5g5g49qprqpr8czjy6460p3f9jvchgmk346af6f5jnysgfdfupv66bvepcplq45luoymn6et4sy01a07fe1ejylnj6fu72ysgfkas5incglft5vaae28ivtekx4tquvu13v74c9fc7okocr5ow3jpcwdon1fzh0tw8tadg9r0ejuxdvwu4o8qmvgn7y4wstfy2dvkr5neztdci5lwcisl8veoiptchbkaely4q0zuv4n9287kp0wj0n0rpo00v7oteua3wnd7p039dga == \p\m\9\b\q\k\0\u\r\0\l\h\q\m\y\1\c\d\g\a\o\z\5\b\o\m\q\s\o\i\g\m\4\d\c\q\n\3\z\7\j\q\g\z\j\j\z\i\m\b\h\b\6\b\w\j\3\l\d\4\x\4\6\y\t\7\2\t\x\n\0\n\z\0\m\9\7\b\r\6\3\w\n\p\t\w\f\v\u\c\y\f\n\o\8\f\q\6\1\p\4\v\9\2\i\j\y\e\m\7\x\b\w\v\f\6\k\k\v\t\7\t\w\r\n\3\s\m\q\2\g\e\7\h\0\t\b\p\q\m\t\t\5\g\q\o\b\a\r\r\i\k\c\1\n\0\2\9\n\y\f\h\n\t\1\m\z\a\2\z\t\e\a\1\0\h\l\w\9\j\h\y\3\h\4\j\v\s\3\n\m\j\s\6\d\s\y\f\m\7\x\l\7\n\g\z\8\z\s\r\e\w\x\o\v\k\e\5\b\4\p\0\9\5\9\w\l\c\b\b\0\3\j\d\j\1\7\d\m\j\5\g\5\g\4\9\q\p\r\q\p\r\8\c\z\j\y\6\4\6\0\p\3\f\9\j\v\c\h\g\m\k\3\4\6\a\f\6\f\5\j\n\y\s\g\f\d\f\u\p\v\6\6\b\v\e\p\c\p\l\q\4\5\l\u\o\y\m\n\6\e\t\4\s\y\0\1\a\0\7\f\e\1\e\j\y\l\n\j\6\f\u\7\2\y\s\g\f\k\a\s\5\i\n\c\g\l\f\t\5\v\a\a\e\2\8\i\v\t\e\k\x\4\t\q\u\v\u\1\3\v\7\4\c\9\f\c\7\o\k\o\c\r\5\o\w\3\j\p\c\w\d\o\n\1\f\z\h\0\t\w\8\t\a\d\g\9\r\0\e\j\u\x\d\v\w\u\4\o\8\q\m\v\g\n\7\y\4\w\s\t\f\y\2\d\v\k\r\5\n\e\z\t\d\c\i\5\l\w\c\i\s\l\8\v\e\o\i\p\t\c\h\b\k\a\e\l\y\4\q\0\z\u\v\4\n\9\2\8\7\k\p\0\w\j\0\n\0\r\p\o\0\0\v\7\o\t\e\u\a\3\w\n\d\7\p\0\3\9\d\g\a ]] 00:38:43.470 05:29:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:43.470 05:29:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:43.728 [2024-12-09 05:29:30.469232] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:43.728 [2024-12-09 05:29:30.469336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60233 ] 00:38:43.728 [2024-12-09 05:29:30.615590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.728 [2024-12-09 05:29:30.648194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.728 [2024-12-09 05:29:30.676477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:43.987  [2024-12-09T05:29:30.959Z] Copying: 512/512 [B] (average 125 kBps) 00:38:43.987 00:38:43.987 05:29:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pm9bqk0ur0lhqmy1cdgaoz5bomqsoigm4dcqn3z7jqgzjjzimbhb6bwj3ld4x46yt72txn0nz0m97br63wnptwfvucyfno8fq61p4v92ijyem7xbwvf6kkvt7twrn3smq2ge7h0tbpqmtt5gqobarrikc1n029nyfhnt1mza2ztea10hlw9jhy3h4jvs3nmjs6dsyfm7xl7ngz8zsrewxovke5b4p0959wlcbb03jdj17dmj5g5g49qprqpr8czjy6460p3f9jvchgmk346af6f5jnysgfdfupv66bvepcplq45luoymn6et4sy01a07fe1ejylnj6fu72ysgfkas5incglft5vaae28ivtekx4tquvu13v74c9fc7okocr5ow3jpcwdon1fzh0tw8tadg9r0ejuxdvwu4o8qmvgn7y4wstfy2dvkr5neztdci5lwcisl8veoiptchbkaely4q0zuv4n9287kp0wj0n0rpo00v7oteua3wnd7p039dga == \p\m\9\b\q\k\0\u\r\0\l\h\q\m\y\1\c\d\g\a\o\z\5\b\o\m\q\s\o\i\g\m\4\d\c\q\n\3\z\7\j\q\g\z\j\j\z\i\m\b\h\b\6\b\w\j\3\l\d\4\x\4\6\y\t\7\2\t\x\n\0\n\z\0\m\9\7\b\r\6\3\w\n\p\t\w\f\v\u\c\y\f\n\o\8\f\q\6\1\p\4\v\9\2\i\j\y\e\m\7\x\b\w\v\f\6\k\k\v\t\7\t\w\r\n\3\s\m\q\2\g\e\7\h\0\t\b\p\q\m\t\t\5\g\q\o\b\a\r\r\i\k\c\1\n\0\2\9\n\y\f\h\n\t\1\m\z\a\2\z\t\e\a\1\0\h\l\w\9\j\h\y\3\h\4\j\v\s\3\n\m\j\s\6\d\s\y\f\m\7\x\l\7\n\g\z\8\z\s\r\e\w\x\o\v\k\e\5\b\4\p\0\9\5\9\w\l\c\b\b\0\3\j\d\j\1\7\d\m\j\5\g\5\g\4\9\q\p\r\q\p\r\8\c\z\j\y\6\4\6\0\p\3\f\9\j\v\c\h\g\m\k\3\4\6\a\f\6\f\5\j\n\y\s\g\f\d\f\u\p\v\6\6\b\v\e\p\c\p\l\q\4\5\l\u\o\y\m\n\6\e\t\4\s\y\0\1\a\0\7\f\e\1\e\j\y\l\n\j\6\f\u\7\2\y\s\g\f\k\a\s\5\i\n\c\g\l\f\t\5\v\a\a\e\2\8\i\v\t\e\k\x\4\t\q\u\v\u\1\3\v\7\4\c\9\f\c\7\o\k\o\c\r\5\o\w\3\j\p\c\w\d\o\n\1\f\z\h\0\t\w\8\t\a\d\g\9\r\0\e\j\u\x\d\v\w\u\4\o\8\q\m\v\g\n\7\y\4\w\s\t\f\y\2\d\v\k\r\5\n\e\z\t\d\c\i\5\l\w\c\i\s\l\8\v\e\o\i\p\t\c\h\b\k\a\e\l\y\4\q\0\z\u\v\4\n\9\2\8\7\k\p\0\w\j\0\n\0\r\p\o\0\0\v\7\o\t\e\u\a\3\w\n\d\7\p\0\3\9\d\g\a ]] 00:38:43.987 05:29:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:43.987 05:29:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:43.987 [2024-12-09 05:29:30.919678] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:43.987 [2024-12-09 05:29:30.919770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60237 ] 00:38:44.265 [2024-12-09 05:29:31.071505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.265 [2024-12-09 05:29:31.111819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.265 [2024-12-09 05:29:31.145274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:44.265  [2024-12-09T05:29:31.497Z] Copying: 512/512 [B] (average 250 kBps) 00:38:44.525 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pm9bqk0ur0lhqmy1cdgaoz5bomqsoigm4dcqn3z7jqgzjjzimbhb6bwj3ld4x46yt72txn0nz0m97br63wnptwfvucyfno8fq61p4v92ijyem7xbwvf6kkvt7twrn3smq2ge7h0tbpqmtt5gqobarrikc1n029nyfhnt1mza2ztea10hlw9jhy3h4jvs3nmjs6dsyfm7xl7ngz8zsrewxovke5b4p0959wlcbb03jdj17dmj5g5g49qprqpr8czjy6460p3f9jvchgmk346af6f5jnysgfdfupv66bvepcplq45luoymn6et4sy01a07fe1ejylnj6fu72ysgfkas5incglft5vaae28ivtekx4tquvu13v74c9fc7okocr5ow3jpcwdon1fzh0tw8tadg9r0ejuxdvwu4o8qmvgn7y4wstfy2dvkr5neztdci5lwcisl8veoiptchbkaely4q0zuv4n9287kp0wj0n0rpo00v7oteua3wnd7p039dga == \p\m\9\b\q\k\0\u\r\0\l\h\q\m\y\1\c\d\g\a\o\z\5\b\o\m\q\s\o\i\g\m\4\d\c\q\n\3\z\7\j\q\g\z\j\j\z\i\m\b\h\b\6\b\w\j\3\l\d\4\x\4\6\y\t\7\2\t\x\n\0\n\z\0\m\9\7\b\r\6\3\w\n\p\t\w\f\v\u\c\y\f\n\o\8\f\q\6\1\p\4\v\9\2\i\j\y\e\m\7\x\b\w\v\f\6\k\k\v\t\7\t\w\r\n\3\s\m\q\2\g\e\7\h\0\t\b\p\q\m\t\t\5\g\q\o\b\a\r\r\i\k\c\1\n\0\2\9\n\y\f\h\n\t\1\m\z\a\2\z\t\e\a\1\0\h\l\w\9\j\h\y\3\h\4\j\v\s\3\n\m\j\s\6\d\s\y\f\m\7\x\l\7\n\g\z\8\z\s\r\e\w\x\o\v\k\e\5\b\4\p\0\9\5\9\w\l\c\b\b\0\3\j\d\j\1\7\d\m\j\5\g\5\g\4\9\q\p\r\q\p\r\8\c\z\j\y\6\4\6\0\p\3\f\9\j\v\c\h\g\m\k\3\4\6\a\f\6\f\5\j\n\y\s\g\f\d\f\u\p\v\6\6\b\v\e\p\c\p\l\q\4\5\l\u\o\y\m\n\6\e\t\4\s\y\0\1\a\0\7\f\e\1\e\j\y\l\n\j\6\f\u\7\2\y\s\g\f\k\a\s\5\i\n\c\g\l\f\t\5\v\a\a\e\2\8\i\v\t\e\k\x\4\t\q\u\v\u\1\3\v\7\4\c\9\f\c\7\o\k\o\c\r\5\o\w\3\j\p\c\w\d\o\n\1\f\z\h\0\t\w\8\t\a\d\g\9\r\0\e\j\u\x\d\v\w\u\4\o\8\q\m\v\g\n\7\y\4\w\s\t\f\y\2\d\v\k\r\5\n\e\z\t\d\c\i\5\l\w\c\i\s\l\8\v\e\o\i\p\t\c\h\b\k\a\e\l\y\4\q\0\z\u\v\4\n\9\2\8\7\k\p\0\w\j\0\n\0\r\p\o\0\0\v\7\o\t\e\u\a\3\w\n\d\7\p\0\3\9\d\g\a ]] 00:38:44.526 00:38:44.526 real 0m3.685s 00:38:44.526 user 0m2.133s 00:38:44.526 sys 0m1.360s 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:44.526 ************************************ 00:38:44.526 END TEST dd_flags_misc 00:38:44.526 ************************************ 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:38:44.526 * Second test run, disabling liburing, forcing AIO 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:38:44.526 ************************************ 00:38:44.526 START TEST dd_flag_append_forced_aio 00:38:44.526 ************************************ 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=v1f79vxr2dv3brcbmkvwzvprb2ccpa78 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=hdm4i0tbf8mm877p8btejhpcqzbl9tl1 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s v1f79vxr2dv3brcbmkvwzvprb2ccpa78 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s hdm4i0tbf8mm877p8btejhpcqzbl9tl1 00:38:44.526 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:38:44.526 [2024-12-09 05:29:31.477125] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
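A short sketch of the append check recorded in this stretch of the log: two 32-byte strings (copied verbatim from the xtrace) are written to the dump files, spdk_dd appends dump0 onto dump1, and dump1 must end up as its old contents followed by dump0's. The output redirections are implied here; xtrace does not print them.

dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
printf %s v1f79vxr2dv3brcbmkvwzvprb2ccpa78 > "$dump0"   # fresh data to append
printf %s hdm4i0tbf8mm877p8btejhpcqzbl9tl1 > "$dump1"   # pre-existing destination contents
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if="$dump0" --of="$dump1" --oflag=append
[[ "$(< "$dump1")" == hdm4i0tbf8mm877p8btejhpcqzbl9tl1v1f79vxr2dv3brcbmkvwzvprb2ccpa78 ]]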
00:38:44.526 [2024-12-09 05:29:31.477258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60271 ] 00:38:44.784 [2024-12-09 05:29:31.628695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.784 [2024-12-09 05:29:31.662571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.784 [2024-12-09 05:29:31.691306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:44.784  [2024-12-09T05:29:32.014Z] Copying: 32/32 [B] (average 31 kBps) 00:38:45.042 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ hdm4i0tbf8mm877p8btejhpcqzbl9tl1v1f79vxr2dv3brcbmkvwzvprb2ccpa78 == \h\d\m\4\i\0\t\b\f\8\m\m\8\7\7\p\8\b\t\e\j\h\p\c\q\z\b\l\9\t\l\1\v\1\f\7\9\v\x\r\2\d\v\3\b\r\c\b\m\k\v\w\z\v\p\r\b\2\c\c\p\a\7\8 ]] 00:38:45.042 00:38:45.042 real 0m0.506s 00:38:45.042 user 0m0.296s 00:38:45.042 sys 0m0.088s 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:45.042 ************************************ 00:38:45.042 END TEST dd_flag_append_forced_aio 00:38:45.042 ************************************ 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:45.042 ************************************ 00:38:45.042 START TEST dd_flag_directory_forced_aio 00:38:45.042 ************************************ 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.042 05:29:31 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:45.042 05:29:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:45.301 [2024-12-09 05:29:32.018270] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:45.301 [2024-12-09 05:29:32.018399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60292 ] 00:38:45.301 [2024-12-09 05:29:32.167884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.301 [2024-12-09 05:29:32.201018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.301 [2024-12-09 05:29:32.232053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:45.301 [2024-12-09 05:29:32.252794] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:45.301 [2024-12-09 05:29:32.252859] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:45.301 [2024-12-09 05:29:32.252878] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:45.559 [2024-12-09 05:29:32.320363] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:45.559 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:45.559 [2024-12-09 05:29:32.483837] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:45.559 [2024-12-09 05:29:32.483948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60307 ] 00:38:45.836 [2024-12-09 05:29:32.637825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.836 [2024-12-09 05:29:32.671526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.837 [2024-12-09 05:29:32.701313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:45.837 [2024-12-09 05:29:32.720997] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:45.837 [2024-12-09 05:29:32.721055] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:45.837 [2024-12-09 05:29:32.721074] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:45.837 [2024-12-09 05:29:32.787454] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:38:46.095 05:29:32 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:46.095 00:38:46.095 real 0m0.928s 00:38:46.095 user 0m0.529s 00:38:46.095 sys 0m0.190s 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:46.095 ************************************ 00:38:46.095 END TEST dd_flag_directory_forced_aio 00:38:46.095 ************************************ 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:46.095 ************************************ 00:38:46.095 START TEST dd_flag_nofollow_forced_aio 00:38:46.095 ************************************ 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:46.095 05:29:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:46.095 [2024-12-09 05:29:33.009258] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:46.095 [2024-12-09 05:29:33.009355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60330 ] 00:38:46.353 [2024-12-09 05:29:33.165557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.353 [2024-12-09 05:29:33.205255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.353 [2024-12-09 05:29:33.238831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:46.353 [2024-12-09 05:29:33.261287] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:46.353 [2024-12-09 05:29:33.261351] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:46.353 [2024-12-09 05:29:33.261376] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:46.611 [2024-12-09 05:29:33.333673] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:46.611 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:46.611 [2024-12-09 05:29:33.492785] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:46.611 [2024-12-09 05:29:33.492885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60345 ] 00:38:46.868 [2024-12-09 05:29:33.643630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.868 [2024-12-09 05:29:33.683049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.868 [2024-12-09 05:29:33.716106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:46.868 [2024-12-09 05:29:33.737975] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:46.868 [2024-12-09 05:29:33.738036] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:46.868 [2024-12-09 05:29:33.738059] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:46.869 [2024-12-09 05:29:33.808045] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:47.126 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:38:47.126 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:47.126 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:38:47.126 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:38:47.126 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:38:47.126 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:47.126 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:38:47.126 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:47.126 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:47.126 05:29:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:47.126 [2024-12-09 05:29:33.998817] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:47.126 [2024-12-09 05:29:33.998919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60347 ] 00:38:47.384 [2024-12-09 05:29:34.151228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.384 [2024-12-09 05:29:34.185153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.384 [2024-12-09 05:29:34.215602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:47.384  [2024-12-09T05:29:34.614Z] Copying: 512/512 [B] (average 500 kBps) 00:38:47.642 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 7oezdy32hqs4mxphnkyt2ivcq1p4v7z01s29347rw8qwct9v10e2e0iagh0srvxrhqjhdokj0un2knndptcyeu0pjsn50heuazepjw076yo431ntmzrot22k83123yligjy52gzmr8e4mif3qb2qivquylde0gpbl6sffii7hqmuuhf2qdju8wid6r6iyi9err7uq1bd6sb29c793ht73p55xcqkumvdt8vlv72er0tpmrziti0b2sd1w5j835h3ef4f3sokdss56lvrieg4yh6ew5iw7ufpuhghcx3hsz21uampu409z8gwsuu5w4sf5gri9ab0a2o11nakyr81ogh4rn10kkutoj6rjb2ev00aspolzw3kqs3pabee2mahwu6s8b0qac68w2x8dfyr3gizy6nqcwy5bsy9w1sscr3vo1735kvgupioi7iivk2fz5ljfkose2h0xdc19rs2sfl4wcn4wmsumwbym68s4tthj0e9g8hi3pj9ccdasawu == \7\o\e\z\d\y\3\2\h\q\s\4\m\x\p\h\n\k\y\t\2\i\v\c\q\1\p\4\v\7\z\0\1\s\2\9\3\4\7\r\w\8\q\w\c\t\9\v\1\0\e\2\e\0\i\a\g\h\0\s\r\v\x\r\h\q\j\h\d\o\k\j\0\u\n\2\k\n\n\d\p\t\c\y\e\u\0\p\j\s\n\5\0\h\e\u\a\z\e\p\j\w\0\7\6\y\o\4\3\1\n\t\m\z\r\o\t\2\2\k\8\3\1\2\3\y\l\i\g\j\y\5\2\g\z\m\r\8\e\4\m\i\f\3\q\b\2\q\i\v\q\u\y\l\d\e\0\g\p\b\l\6\s\f\f\i\i\7\h\q\m\u\u\h\f\2\q\d\j\u\8\w\i\d\6\r\6\i\y\i\9\e\r\r\7\u\q\1\b\d\6\s\b\2\9\c\7\9\3\h\t\7\3\p\5\5\x\c\q\k\u\m\v\d\t\8\v\l\v\7\2\e\r\0\t\p\m\r\z\i\t\i\0\b\2\s\d\1\w\5\j\8\3\5\h\3\e\f\4\f\3\s\o\k\d\s\s\5\6\l\v\r\i\e\g\4\y\h\6\e\w\5\i\w\7\u\f\p\u\h\g\h\c\x\3\h\s\z\2\1\u\a\m\p\u\4\0\9\z\8\g\w\s\u\u\5\w\4\s\f\5\g\r\i\9\a\b\0\a\2\o\1\1\n\a\k\y\r\8\1\o\g\h\4\r\n\1\0\k\k\u\t\o\j\6\r\j\b\2\e\v\0\0\a\s\p\o\l\z\w\3\k\q\s\3\p\a\b\e\e\2\m\a\h\w\u\6\s\8\b\0\q\a\c\6\8\w\2\x\8\d\f\y\r\3\g\i\z\y\6\n\q\c\w\y\5\b\s\y\9\w\1\s\s\c\r\3\v\o\1\7\3\5\k\v\g\u\p\i\o\i\7\i\i\v\k\2\f\z\5\l\j\f\k\o\s\e\2\h\0\x\d\c\1\9\r\s\2\s\f\l\4\w\c\n\4\w\m\s\u\m\w\b\y\m\6\8\s\4\t\t\h\j\0\e\9\g\8\h\i\3\p\j\9\c\c\d\a\s\a\w\u ]] 00:38:47.642 00:38:47.642 real 0m1.509s 00:38:47.642 user 0m0.886s 00:38:47.642 sys 0m0.292s 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:47.642 ************************************ 00:38:47.642 END TEST dd_flag_nofollow_forced_aio 00:38:47.642 ************************************ 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:47.642 ************************************ 00:38:47.642 START TEST dd_flag_noatime_forced_aio 00:38:47.642 ************************************ 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733722174 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733722174 00:38:47.642 05:29:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:38:48.580 05:29:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:48.838 [2024-12-09 05:29:35.580958] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
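A compact sketch of the noatime check this test performs, assembled from the surrounding stat and spdk_dd lines; the command substitutions are spelled out here, whereas the log only shows their already-expanded values (1733722174, and the later < 1733722176 comparison).

cd /home/vagrant/spdk_repo/spdk/test/dd
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
atime_if=$(stat --printf=%X dd.dump0)              # source atime before any copy
sleep 1
"$DD" --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( atime_if == $(stat --printf=%X dd.dump0) ))     # a noatime read must leave the atime untouched
"$DD" --aio --if=dd.dump0 --of=dd.dump1
(( atime_if < $(stat --printf=%X dd.dump0) ))      # a normal read advances it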
00:38:48.838 [2024-12-09 05:29:35.581056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60395 ] 00:38:48.838 [2024-12-09 05:29:35.734055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.838 [2024-12-09 05:29:35.774607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.095 [2024-12-09 05:29:35.808496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:49.095  [2024-12-09T05:29:36.067Z] Copying: 512/512 [B] (average 500 kBps) 00:38:49.095 00:38:49.095 05:29:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:49.095 05:29:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733722174 )) 00:38:49.095 05:29:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:49.095 05:29:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733722174 )) 00:38:49.095 05:29:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:49.352 [2024-12-09 05:29:36.112607] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:49.353 [2024-12-09 05:29:36.112709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60401 ] 00:38:49.353 [2024-12-09 05:29:36.267177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.353 [2024-12-09 05:29:36.307415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.609 [2024-12-09 05:29:36.341084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:49.609  [2024-12-09T05:29:36.839Z] Copying: 512/512 [B] (average 500 kBps) 00:38:49.867 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733722176 )) 00:38:49.867 00:38:49.867 real 0m2.102s 00:38:49.867 user 0m0.619s 00:38:49.867 sys 0m0.229s 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:49.867 ************************************ 00:38:49.867 END TEST dd_flag_noatime_forced_aio 00:38:49.867 ************************************ 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:38:49.867 ************************************ 00:38:49.867 START TEST dd_flags_misc_forced_aio 00:38:49.867 ************************************ 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:49.867 05:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:49.867 [2024-12-09 05:29:36.725075] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:49.867 [2024-12-09 05:29:36.725173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60433 ] 00:38:50.124 [2024-12-09 05:29:36.881398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.124 [2024-12-09 05:29:36.927501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.125 [2024-12-09 05:29:36.960034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:50.125  [2024-12-09T05:29:37.354Z] Copying: 512/512 [B] (average 500 kBps) 00:38:50.382 00:38:50.382 05:29:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jrhr3qykdtlz291iwrwkj6bbnkroc6cb1bybrpye851eeffzj6p7fgcgjpgi7tw8xvmnddgws1u8j7aaxz0yxe20enw14kaq7gj5tw7x6dm5ji10zrdwt6p5wlkli245pg4ai77sk1wwu5bcau2g57w18y3ozms3qvej3xwtk1dnbers2k99x9l6492utvsr0429rbsya0rglsaav8vdty0ogvhydjx73kd08q7jc9saf012h371o3lis8cs7b4nk0plekrow99kqo95hfjjtpibpznphme0748ymqsu8ipzn975qec36ez6jp2ttxbg0ceo4josocm1n0k9pee9t458xuruwqz6qdjcu57awkr493ybk1fwtvgod5p21h2ilx8chetdk5f48fhudkwpfozbf3ke7f08eqa1j7q6wfjgd0xwgsk70wx4glkcbbau1hk53yjl1bi1rlukbziw0p7ohqtbkfk1nprwoyvjlpu76r2jj0tztrutcz7wp5k1 == 
\j\r\h\r\3\q\y\k\d\t\l\z\2\9\1\i\w\r\w\k\j\6\b\b\n\k\r\o\c\6\c\b\1\b\y\b\r\p\y\e\8\5\1\e\e\f\f\z\j\6\p\7\f\g\c\g\j\p\g\i\7\t\w\8\x\v\m\n\d\d\g\w\s\1\u\8\j\7\a\a\x\z\0\y\x\e\2\0\e\n\w\1\4\k\a\q\7\g\j\5\t\w\7\x\6\d\m\5\j\i\1\0\z\r\d\w\t\6\p\5\w\l\k\l\i\2\4\5\p\g\4\a\i\7\7\s\k\1\w\w\u\5\b\c\a\u\2\g\5\7\w\1\8\y\3\o\z\m\s\3\q\v\e\j\3\x\w\t\k\1\d\n\b\e\r\s\2\k\9\9\x\9\l\6\4\9\2\u\t\v\s\r\0\4\2\9\r\b\s\y\a\0\r\g\l\s\a\a\v\8\v\d\t\y\0\o\g\v\h\y\d\j\x\7\3\k\d\0\8\q\7\j\c\9\s\a\f\0\1\2\h\3\7\1\o\3\l\i\s\8\c\s\7\b\4\n\k\0\p\l\e\k\r\o\w\9\9\k\q\o\9\5\h\f\j\j\t\p\i\b\p\z\n\p\h\m\e\0\7\4\8\y\m\q\s\u\8\i\p\z\n\9\7\5\q\e\c\3\6\e\z\6\j\p\2\t\t\x\b\g\0\c\e\o\4\j\o\s\o\c\m\1\n\0\k\9\p\e\e\9\t\4\5\8\x\u\r\u\w\q\z\6\q\d\j\c\u\5\7\a\w\k\r\4\9\3\y\b\k\1\f\w\t\v\g\o\d\5\p\2\1\h\2\i\l\x\8\c\h\e\t\d\k\5\f\4\8\f\h\u\d\k\w\p\f\o\z\b\f\3\k\e\7\f\0\8\e\q\a\1\j\7\q\6\w\f\j\g\d\0\x\w\g\s\k\7\0\w\x\4\g\l\k\c\b\b\a\u\1\h\k\5\3\y\j\l\1\b\i\1\r\l\u\k\b\z\i\w\0\p\7\o\h\q\t\b\k\f\k\1\n\p\r\w\o\y\v\j\l\p\u\7\6\r\2\j\j\0\t\z\t\r\u\t\c\z\7\w\p\5\k\1 ]] 00:38:50.382 05:29:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:50.382 05:29:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:50.382 [2024-12-09 05:29:37.254124] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:50.383 [2024-12-09 05:29:37.254239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60441 ] 00:38:50.640 [2024-12-09 05:29:37.410533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.640 [2024-12-09 05:29:37.449482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.640 [2024-12-09 05:29:37.481634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:50.640  [2024-12-09T05:29:37.870Z] Copying: 512/512 [B] (average 500 kBps) 00:38:50.898 00:38:50.898 05:29:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jrhr3qykdtlz291iwrwkj6bbnkroc6cb1bybrpye851eeffzj6p7fgcgjpgi7tw8xvmnddgws1u8j7aaxz0yxe20enw14kaq7gj5tw7x6dm5ji10zrdwt6p5wlkli245pg4ai77sk1wwu5bcau2g57w18y3ozms3qvej3xwtk1dnbers2k99x9l6492utvsr0429rbsya0rglsaav8vdty0ogvhydjx73kd08q7jc9saf012h371o3lis8cs7b4nk0plekrow99kqo95hfjjtpibpznphme0748ymqsu8ipzn975qec36ez6jp2ttxbg0ceo4josocm1n0k9pee9t458xuruwqz6qdjcu57awkr493ybk1fwtvgod5p21h2ilx8chetdk5f48fhudkwpfozbf3ke7f08eqa1j7q6wfjgd0xwgsk70wx4glkcbbau1hk53yjl1bi1rlukbziw0p7ohqtbkfk1nprwoyvjlpu76r2jj0tztrutcz7wp5k1 == 
\j\r\h\r\3\q\y\k\d\t\l\z\2\9\1\i\w\r\w\k\j\6\b\b\n\k\r\o\c\6\c\b\1\b\y\b\r\p\y\e\8\5\1\e\e\f\f\z\j\6\p\7\f\g\c\g\j\p\g\i\7\t\w\8\x\v\m\n\d\d\g\w\s\1\u\8\j\7\a\a\x\z\0\y\x\e\2\0\e\n\w\1\4\k\a\q\7\g\j\5\t\w\7\x\6\d\m\5\j\i\1\0\z\r\d\w\t\6\p\5\w\l\k\l\i\2\4\5\p\g\4\a\i\7\7\s\k\1\w\w\u\5\b\c\a\u\2\g\5\7\w\1\8\y\3\o\z\m\s\3\q\v\e\j\3\x\w\t\k\1\d\n\b\e\r\s\2\k\9\9\x\9\l\6\4\9\2\u\t\v\s\r\0\4\2\9\r\b\s\y\a\0\r\g\l\s\a\a\v\8\v\d\t\y\0\o\g\v\h\y\d\j\x\7\3\k\d\0\8\q\7\j\c\9\s\a\f\0\1\2\h\3\7\1\o\3\l\i\s\8\c\s\7\b\4\n\k\0\p\l\e\k\r\o\w\9\9\k\q\o\9\5\h\f\j\j\t\p\i\b\p\z\n\p\h\m\e\0\7\4\8\y\m\q\s\u\8\i\p\z\n\9\7\5\q\e\c\3\6\e\z\6\j\p\2\t\t\x\b\g\0\c\e\o\4\j\o\s\o\c\m\1\n\0\k\9\p\e\e\9\t\4\5\8\x\u\r\u\w\q\z\6\q\d\j\c\u\5\7\a\w\k\r\4\9\3\y\b\k\1\f\w\t\v\g\o\d\5\p\2\1\h\2\i\l\x\8\c\h\e\t\d\k\5\f\4\8\f\h\u\d\k\w\p\f\o\z\b\f\3\k\e\7\f\0\8\e\q\a\1\j\7\q\6\w\f\j\g\d\0\x\w\g\s\k\7\0\w\x\4\g\l\k\c\b\b\a\u\1\h\k\5\3\y\j\l\1\b\i\1\r\l\u\k\b\z\i\w\0\p\7\o\h\q\t\b\k\f\k\1\n\p\r\w\o\y\v\j\l\p\u\7\6\r\2\j\j\0\t\z\t\r\u\t\c\z\7\w\p\5\k\1 ]] 00:38:50.898 05:29:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:50.898 05:29:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:50.898 [2024-12-09 05:29:37.764077] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:50.898 [2024-12-09 05:29:37.764238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:38:51.156 [2024-12-09 05:29:37.919768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:51.156 [2024-12-09 05:29:37.950668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.156 [2024-12-09 05:29:37.978539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:51.156  [2024-12-09T05:29:38.388Z] Copying: 512/512 [B] (average 100 kBps) 00:38:51.416 00:38:51.416 05:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jrhr3qykdtlz291iwrwkj6bbnkroc6cb1bybrpye851eeffzj6p7fgcgjpgi7tw8xvmnddgws1u8j7aaxz0yxe20enw14kaq7gj5tw7x6dm5ji10zrdwt6p5wlkli245pg4ai77sk1wwu5bcau2g57w18y3ozms3qvej3xwtk1dnbers2k99x9l6492utvsr0429rbsya0rglsaav8vdty0ogvhydjx73kd08q7jc9saf012h371o3lis8cs7b4nk0plekrow99kqo95hfjjtpibpznphme0748ymqsu8ipzn975qec36ez6jp2ttxbg0ceo4josocm1n0k9pee9t458xuruwqz6qdjcu57awkr493ybk1fwtvgod5p21h2ilx8chetdk5f48fhudkwpfozbf3ke7f08eqa1j7q6wfjgd0xwgsk70wx4glkcbbau1hk53yjl1bi1rlukbziw0p7ohqtbkfk1nprwoyvjlpu76r2jj0tztrutcz7wp5k1 == 
\j\r\h\r\3\q\y\k\d\t\l\z\2\9\1\i\w\r\w\k\j\6\b\b\n\k\r\o\c\6\c\b\1\b\y\b\r\p\y\e\8\5\1\e\e\f\f\z\j\6\p\7\f\g\c\g\j\p\g\i\7\t\w\8\x\v\m\n\d\d\g\w\s\1\u\8\j\7\a\a\x\z\0\y\x\e\2\0\e\n\w\1\4\k\a\q\7\g\j\5\t\w\7\x\6\d\m\5\j\i\1\0\z\r\d\w\t\6\p\5\w\l\k\l\i\2\4\5\p\g\4\a\i\7\7\s\k\1\w\w\u\5\b\c\a\u\2\g\5\7\w\1\8\y\3\o\z\m\s\3\q\v\e\j\3\x\w\t\k\1\d\n\b\e\r\s\2\k\9\9\x\9\l\6\4\9\2\u\t\v\s\r\0\4\2\9\r\b\s\y\a\0\r\g\l\s\a\a\v\8\v\d\t\y\0\o\g\v\h\y\d\j\x\7\3\k\d\0\8\q\7\j\c\9\s\a\f\0\1\2\h\3\7\1\o\3\l\i\s\8\c\s\7\b\4\n\k\0\p\l\e\k\r\o\w\9\9\k\q\o\9\5\h\f\j\j\t\p\i\b\p\z\n\p\h\m\e\0\7\4\8\y\m\q\s\u\8\i\p\z\n\9\7\5\q\e\c\3\6\e\z\6\j\p\2\t\t\x\b\g\0\c\e\o\4\j\o\s\o\c\m\1\n\0\k\9\p\e\e\9\t\4\5\8\x\u\r\u\w\q\z\6\q\d\j\c\u\5\7\a\w\k\r\4\9\3\y\b\k\1\f\w\t\v\g\o\d\5\p\2\1\h\2\i\l\x\8\c\h\e\t\d\k\5\f\4\8\f\h\u\d\k\w\p\f\o\z\b\f\3\k\e\7\f\0\8\e\q\a\1\j\7\q\6\w\f\j\g\d\0\x\w\g\s\k\7\0\w\x\4\g\l\k\c\b\b\a\u\1\h\k\5\3\y\j\l\1\b\i\1\r\l\u\k\b\z\i\w\0\p\7\o\h\q\t\b\k\f\k\1\n\p\r\w\o\y\v\j\l\p\u\7\6\r\2\j\j\0\t\z\t\r\u\t\c\z\7\w\p\5\k\1 ]] 00:38:51.416 05:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:51.416 05:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:51.416 [2024-12-09 05:29:38.245546] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:51.416 [2024-12-09 05:29:38.245661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60456 ] 00:38:51.676 [2024-12-09 05:29:38.391901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:51.676 [2024-12-09 05:29:38.422656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.676 [2024-12-09 05:29:38.450375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:51.676  [2024-12-09T05:29:38.907Z] Copying: 512/512 [B] (average 500 kBps) 00:38:51.935 00:38:51.935 05:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jrhr3qykdtlz291iwrwkj6bbnkroc6cb1bybrpye851eeffzj6p7fgcgjpgi7tw8xvmnddgws1u8j7aaxz0yxe20enw14kaq7gj5tw7x6dm5ji10zrdwt6p5wlkli245pg4ai77sk1wwu5bcau2g57w18y3ozms3qvej3xwtk1dnbers2k99x9l6492utvsr0429rbsya0rglsaav8vdty0ogvhydjx73kd08q7jc9saf012h371o3lis8cs7b4nk0plekrow99kqo95hfjjtpibpznphme0748ymqsu8ipzn975qec36ez6jp2ttxbg0ceo4josocm1n0k9pee9t458xuruwqz6qdjcu57awkr493ybk1fwtvgod5p21h2ilx8chetdk5f48fhudkwpfozbf3ke7f08eqa1j7q6wfjgd0xwgsk70wx4glkcbbau1hk53yjl1bi1rlukbziw0p7ohqtbkfk1nprwoyvjlpu76r2jj0tztrutcz7wp5k1 == 
\j\r\h\r\3\q\y\k\d\t\l\z\2\9\1\i\w\r\w\k\j\6\b\b\n\k\r\o\c\6\c\b\1\b\y\b\r\p\y\e\8\5\1\e\e\f\f\z\j\6\p\7\f\g\c\g\j\p\g\i\7\t\w\8\x\v\m\n\d\d\g\w\s\1\u\8\j\7\a\a\x\z\0\y\x\e\2\0\e\n\w\1\4\k\a\q\7\g\j\5\t\w\7\x\6\d\m\5\j\i\1\0\z\r\d\w\t\6\p\5\w\l\k\l\i\2\4\5\p\g\4\a\i\7\7\s\k\1\w\w\u\5\b\c\a\u\2\g\5\7\w\1\8\y\3\o\z\m\s\3\q\v\e\j\3\x\w\t\k\1\d\n\b\e\r\s\2\k\9\9\x\9\l\6\4\9\2\u\t\v\s\r\0\4\2\9\r\b\s\y\a\0\r\g\l\s\a\a\v\8\v\d\t\y\0\o\g\v\h\y\d\j\x\7\3\k\d\0\8\q\7\j\c\9\s\a\f\0\1\2\h\3\7\1\o\3\l\i\s\8\c\s\7\b\4\n\k\0\p\l\e\k\r\o\w\9\9\k\q\o\9\5\h\f\j\j\t\p\i\b\p\z\n\p\h\m\e\0\7\4\8\y\m\q\s\u\8\i\p\z\n\9\7\5\q\e\c\3\6\e\z\6\j\p\2\t\t\x\b\g\0\c\e\o\4\j\o\s\o\c\m\1\n\0\k\9\p\e\e\9\t\4\5\8\x\u\r\u\w\q\z\6\q\d\j\c\u\5\7\a\w\k\r\4\9\3\y\b\k\1\f\w\t\v\g\o\d\5\p\2\1\h\2\i\l\x\8\c\h\e\t\d\k\5\f\4\8\f\h\u\d\k\w\p\f\o\z\b\f\3\k\e\7\f\0\8\e\q\a\1\j\7\q\6\w\f\j\g\d\0\x\w\g\s\k\7\0\w\x\4\g\l\k\c\b\b\a\u\1\h\k\5\3\y\j\l\1\b\i\1\r\l\u\k\b\z\i\w\0\p\7\o\h\q\t\b\k\f\k\1\n\p\r\w\o\y\v\j\l\p\u\7\6\r\2\j\j\0\t\z\t\r\u\t\c\z\7\w\p\5\k\1 ]] 00:38:51.935 05:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:51.935 05:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:38:51.935 05:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:51.935 05:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:51.935 05:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:51.935 05:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:51.935 [2024-12-09 05:29:38.736986] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:51.935 [2024-12-09 05:29:38.737107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60463 ] 00:38:51.935 [2024-12-09 05:29:38.888119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.193 [2024-12-09 05:29:38.919743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.193 [2024-12-09 05:29:38.948762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:52.193  [2024-12-09T05:29:39.424Z] Copying: 512/512 [B] (average 500 kBps) 00:38:52.452 00:38:52.452 05:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bxa6eikbgxybrtczzp53ze73xszzzvxxk2tih5agy3btor2dqqt6uwosc06cd1ewjtylxivnukdhy2c69v2suflbx8eqm41z7pb3erpc5o3u328vmmrhxz0i767109dokthp89tlzowlbsgbetbufnqtdxllgjhjx3vmjsa5ngwo0vismwjz65j520nkpqlbry898zvdnrvw2s66mxsl889iqjexo4hqs6p6bbau1e5zaxlaojzz76rzdkietdr6kq25ku3wv7ozgkn55uashbes6dxxyjotuqrs1mpu568hiu3rnpcnq58epv3cm108kj97f2e3gag8huxvtgnwnppl47in6jo1ullrqp1ggexcqfbn4ymff7uj4gwuxeku8v3e2xgpzhqfvbpwcj0osh20mfb4isvei3hm79rzs053k22fjvj1slk7qqs1kc8dcjatpr8s4iusa2l828i4puu5pdr6cs5vjsmp85u53ytquv1kvcbi20fnismg2njo == \b\x\a\6\e\i\k\b\g\x\y\b\r\t\c\z\z\p\5\3\z\e\7\3\x\s\z\z\z\v\x\x\k\2\t\i\h\5\a\g\y\3\b\t\o\r\2\d\q\q\t\6\u\w\o\s\c\0\6\c\d\1\e\w\j\t\y\l\x\i\v\n\u\k\d\h\y\2\c\6\9\v\2\s\u\f\l\b\x\8\e\q\m\4\1\z\7\p\b\3\e\r\p\c\5\o\3\u\3\2\8\v\m\m\r\h\x\z\0\i\7\6\7\1\0\9\d\o\k\t\h\p\8\9\t\l\z\o\w\l\b\s\g\b\e\t\b\u\f\n\q\t\d\x\l\l\g\j\h\j\x\3\v\m\j\s\a\5\n\g\w\o\0\v\i\s\m\w\j\z\6\5\j\5\2\0\n\k\p\q\l\b\r\y\8\9\8\z\v\d\n\r\v\w\2\s\6\6\m\x\s\l\8\8\9\i\q\j\e\x\o\4\h\q\s\6\p\6\b\b\a\u\1\e\5\z\a\x\l\a\o\j\z\z\7\6\r\z\d\k\i\e\t\d\r\6\k\q\2\5\k\u\3\w\v\7\o\z\g\k\n\5\5\u\a\s\h\b\e\s\6\d\x\x\y\j\o\t\u\q\r\s\1\m\p\u\5\6\8\h\i\u\3\r\n\p\c\n\q\5\8\e\p\v\3\c\m\1\0\8\k\j\9\7\f\2\e\3\g\a\g\8\h\u\x\v\t\g\n\w\n\p\p\l\4\7\i\n\6\j\o\1\u\l\l\r\q\p\1\g\g\e\x\c\q\f\b\n\4\y\m\f\f\7\u\j\4\g\w\u\x\e\k\u\8\v\3\e\2\x\g\p\z\h\q\f\v\b\p\w\c\j\0\o\s\h\2\0\m\f\b\4\i\s\v\e\i\3\h\m\7\9\r\z\s\0\5\3\k\2\2\f\j\v\j\1\s\l\k\7\q\q\s\1\k\c\8\d\c\j\a\t\p\r\8\s\4\i\u\s\a\2\l\8\2\8\i\4\p\u\u\5\p\d\r\6\c\s\5\v\j\s\m\p\8\5\u\5\3\y\t\q\u\v\1\k\v\c\b\i\2\0\f\n\i\s\m\g\2\n\j\o ]] 00:38:52.452 05:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:52.452 05:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:52.452 [2024-12-09 05:29:39.225165] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:52.452 [2024-12-09 05:29:39.225260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60465 ] 00:38:52.452 [2024-12-09 05:29:39.376994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.452 [2024-12-09 05:29:39.409446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.711 [2024-12-09 05:29:39.438474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:52.711  [2024-12-09T05:29:39.683Z] Copying: 512/512 [B] (average 500 kBps) 00:38:52.711 00:38:52.711 05:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bxa6eikbgxybrtczzp53ze73xszzzvxxk2tih5agy3btor2dqqt6uwosc06cd1ewjtylxivnukdhy2c69v2suflbx8eqm41z7pb3erpc5o3u328vmmrhxz0i767109dokthp89tlzowlbsgbetbufnqtdxllgjhjx3vmjsa5ngwo0vismwjz65j520nkpqlbry898zvdnrvw2s66mxsl889iqjexo4hqs6p6bbau1e5zaxlaojzz76rzdkietdr6kq25ku3wv7ozgkn55uashbes6dxxyjotuqrs1mpu568hiu3rnpcnq58epv3cm108kj97f2e3gag8huxvtgnwnppl47in6jo1ullrqp1ggexcqfbn4ymff7uj4gwuxeku8v3e2xgpzhqfvbpwcj0osh20mfb4isvei3hm79rzs053k22fjvj1slk7qqs1kc8dcjatpr8s4iusa2l828i4puu5pdr6cs5vjsmp85u53ytquv1kvcbi20fnismg2njo == \b\x\a\6\e\i\k\b\g\x\y\b\r\t\c\z\z\p\5\3\z\e\7\3\x\s\z\z\z\v\x\x\k\2\t\i\h\5\a\g\y\3\b\t\o\r\2\d\q\q\t\6\u\w\o\s\c\0\6\c\d\1\e\w\j\t\y\l\x\i\v\n\u\k\d\h\y\2\c\6\9\v\2\s\u\f\l\b\x\8\e\q\m\4\1\z\7\p\b\3\e\r\p\c\5\o\3\u\3\2\8\v\m\m\r\h\x\z\0\i\7\6\7\1\0\9\d\o\k\t\h\p\8\9\t\l\z\o\w\l\b\s\g\b\e\t\b\u\f\n\q\t\d\x\l\l\g\j\h\j\x\3\v\m\j\s\a\5\n\g\w\o\0\v\i\s\m\w\j\z\6\5\j\5\2\0\n\k\p\q\l\b\r\y\8\9\8\z\v\d\n\r\v\w\2\s\6\6\m\x\s\l\8\8\9\i\q\j\e\x\o\4\h\q\s\6\p\6\b\b\a\u\1\e\5\z\a\x\l\a\o\j\z\z\7\6\r\z\d\k\i\e\t\d\r\6\k\q\2\5\k\u\3\w\v\7\o\z\g\k\n\5\5\u\a\s\h\b\e\s\6\d\x\x\y\j\o\t\u\q\r\s\1\m\p\u\5\6\8\h\i\u\3\r\n\p\c\n\q\5\8\e\p\v\3\c\m\1\0\8\k\j\9\7\f\2\e\3\g\a\g\8\h\u\x\v\t\g\n\w\n\p\p\l\4\7\i\n\6\j\o\1\u\l\l\r\q\p\1\g\g\e\x\c\q\f\b\n\4\y\m\f\f\7\u\j\4\g\w\u\x\e\k\u\8\v\3\e\2\x\g\p\z\h\q\f\v\b\p\w\c\j\0\o\s\h\2\0\m\f\b\4\i\s\v\e\i\3\h\m\7\9\r\z\s\0\5\3\k\2\2\f\j\v\j\1\s\l\k\7\q\q\s\1\k\c\8\d\c\j\a\t\p\r\8\s\4\i\u\s\a\2\l\8\2\8\i\4\p\u\u\5\p\d\r\6\c\s\5\v\j\s\m\p\8\5\u\5\3\y\t\q\u\v\1\k\v\c\b\i\2\0\f\n\i\s\m\g\2\n\j\o ]] 00:38:52.711 05:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:52.711 05:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:52.970 [2024-12-09 05:29:39.714485] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:52.970 [2024-12-09 05:29:39.714618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60478 ] 00:38:52.970 [2024-12-09 05:29:39.867592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.970 [2024-12-09 05:29:39.899276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.970 [2024-12-09 05:29:39.927532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:53.228  [2024-12-09T05:29:40.201Z] Copying: 512/512 [B] (average 500 kBps) 00:38:53.229 00:38:53.229 05:29:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bxa6eikbgxybrtczzp53ze73xszzzvxxk2tih5agy3btor2dqqt6uwosc06cd1ewjtylxivnukdhy2c69v2suflbx8eqm41z7pb3erpc5o3u328vmmrhxz0i767109dokthp89tlzowlbsgbetbufnqtdxllgjhjx3vmjsa5ngwo0vismwjz65j520nkpqlbry898zvdnrvw2s66mxsl889iqjexo4hqs6p6bbau1e5zaxlaojzz76rzdkietdr6kq25ku3wv7ozgkn55uashbes6dxxyjotuqrs1mpu568hiu3rnpcnq58epv3cm108kj97f2e3gag8huxvtgnwnppl47in6jo1ullrqp1ggexcqfbn4ymff7uj4gwuxeku8v3e2xgpzhqfvbpwcj0osh20mfb4isvei3hm79rzs053k22fjvj1slk7qqs1kc8dcjatpr8s4iusa2l828i4puu5pdr6cs5vjsmp85u53ytquv1kvcbi20fnismg2njo == \b\x\a\6\e\i\k\b\g\x\y\b\r\t\c\z\z\p\5\3\z\e\7\3\x\s\z\z\z\v\x\x\k\2\t\i\h\5\a\g\y\3\b\t\o\r\2\d\q\q\t\6\u\w\o\s\c\0\6\c\d\1\e\w\j\t\y\l\x\i\v\n\u\k\d\h\y\2\c\6\9\v\2\s\u\f\l\b\x\8\e\q\m\4\1\z\7\p\b\3\e\r\p\c\5\o\3\u\3\2\8\v\m\m\r\h\x\z\0\i\7\6\7\1\0\9\d\o\k\t\h\p\8\9\t\l\z\o\w\l\b\s\g\b\e\t\b\u\f\n\q\t\d\x\l\l\g\j\h\j\x\3\v\m\j\s\a\5\n\g\w\o\0\v\i\s\m\w\j\z\6\5\j\5\2\0\n\k\p\q\l\b\r\y\8\9\8\z\v\d\n\r\v\w\2\s\6\6\m\x\s\l\8\8\9\i\q\j\e\x\o\4\h\q\s\6\p\6\b\b\a\u\1\e\5\z\a\x\l\a\o\j\z\z\7\6\r\z\d\k\i\e\t\d\r\6\k\q\2\5\k\u\3\w\v\7\o\z\g\k\n\5\5\u\a\s\h\b\e\s\6\d\x\x\y\j\o\t\u\q\r\s\1\m\p\u\5\6\8\h\i\u\3\r\n\p\c\n\q\5\8\e\p\v\3\c\m\1\0\8\k\j\9\7\f\2\e\3\g\a\g\8\h\u\x\v\t\g\n\w\n\p\p\l\4\7\i\n\6\j\o\1\u\l\l\r\q\p\1\g\g\e\x\c\q\f\b\n\4\y\m\f\f\7\u\j\4\g\w\u\x\e\k\u\8\v\3\e\2\x\g\p\z\h\q\f\v\b\p\w\c\j\0\o\s\h\2\0\m\f\b\4\i\s\v\e\i\3\h\m\7\9\r\z\s\0\5\3\k\2\2\f\j\v\j\1\s\l\k\7\q\q\s\1\k\c\8\d\c\j\a\t\p\r\8\s\4\i\u\s\a\2\l\8\2\8\i\4\p\u\u\5\p\d\r\6\c\s\5\v\j\s\m\p\8\5\u\5\3\y\t\q\u\v\1\k\v\c\b\i\2\0\f\n\i\s\m\g\2\n\j\o ]] 00:38:53.229 05:29:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:53.229 05:29:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:53.488 [2024-12-09 05:29:40.216885] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:53.488 [2024-12-09 05:29:40.217002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60480 ] 00:38:53.488 [2024-12-09 05:29:40.364650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.488 [2024-12-09 05:29:40.410519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.488 [2024-12-09 05:29:40.445513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:53.746  [2024-12-09T05:29:40.718Z] Copying: 512/512 [B] (average 250 kBps) 00:38:53.746 00:38:53.746 ************************************ 00:38:53.746 END TEST dd_flags_misc_forced_aio 00:38:53.746 ************************************ 00:38:53.746 05:29:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bxa6eikbgxybrtczzp53ze73xszzzvxxk2tih5agy3btor2dqqt6uwosc06cd1ewjtylxivnukdhy2c69v2suflbx8eqm41z7pb3erpc5o3u328vmmrhxz0i767109dokthp89tlzowlbsgbetbufnqtdxllgjhjx3vmjsa5ngwo0vismwjz65j520nkpqlbry898zvdnrvw2s66mxsl889iqjexo4hqs6p6bbau1e5zaxlaojzz76rzdkietdr6kq25ku3wv7ozgkn55uashbes6dxxyjotuqrs1mpu568hiu3rnpcnq58epv3cm108kj97f2e3gag8huxvtgnwnppl47in6jo1ullrqp1ggexcqfbn4ymff7uj4gwuxeku8v3e2xgpzhqfvbpwcj0osh20mfb4isvei3hm79rzs053k22fjvj1slk7qqs1kc8dcjatpr8s4iusa2l828i4puu5pdr6cs5vjsmp85u53ytquv1kvcbi20fnismg2njo == \b\x\a\6\e\i\k\b\g\x\y\b\r\t\c\z\z\p\5\3\z\e\7\3\x\s\z\z\z\v\x\x\k\2\t\i\h\5\a\g\y\3\b\t\o\r\2\d\q\q\t\6\u\w\o\s\c\0\6\c\d\1\e\w\j\t\y\l\x\i\v\n\u\k\d\h\y\2\c\6\9\v\2\s\u\f\l\b\x\8\e\q\m\4\1\z\7\p\b\3\e\r\p\c\5\o\3\u\3\2\8\v\m\m\r\h\x\z\0\i\7\6\7\1\0\9\d\o\k\t\h\p\8\9\t\l\z\o\w\l\b\s\g\b\e\t\b\u\f\n\q\t\d\x\l\l\g\j\h\j\x\3\v\m\j\s\a\5\n\g\w\o\0\v\i\s\m\w\j\z\6\5\j\5\2\0\n\k\p\q\l\b\r\y\8\9\8\z\v\d\n\r\v\w\2\s\6\6\m\x\s\l\8\8\9\i\q\j\e\x\o\4\h\q\s\6\p\6\b\b\a\u\1\e\5\z\a\x\l\a\o\j\z\z\7\6\r\z\d\k\i\e\t\d\r\6\k\q\2\5\k\u\3\w\v\7\o\z\g\k\n\5\5\u\a\s\h\b\e\s\6\d\x\x\y\j\o\t\u\q\r\s\1\m\p\u\5\6\8\h\i\u\3\r\n\p\c\n\q\5\8\e\p\v\3\c\m\1\0\8\k\j\9\7\f\2\e\3\g\a\g\8\h\u\x\v\t\g\n\w\n\p\p\l\4\7\i\n\6\j\o\1\u\l\l\r\q\p\1\g\g\e\x\c\q\f\b\n\4\y\m\f\f\7\u\j\4\g\w\u\x\e\k\u\8\v\3\e\2\x\g\p\z\h\q\f\v\b\p\w\c\j\0\o\s\h\2\0\m\f\b\4\i\s\v\e\i\3\h\m\7\9\r\z\s\0\5\3\k\2\2\f\j\v\j\1\s\l\k\7\q\q\s\1\k\c\8\d\c\j\a\t\p\r\8\s\4\i\u\s\a\2\l\8\2\8\i\4\p\u\u\5\p\d\r\6\c\s\5\v\j\s\m\p\8\5\u\5\3\y\t\q\u\v\1\k\v\c\b\i\2\0\f\n\i\s\m\g\2\n\j\o ]] 00:38:53.746 00:38:53.746 real 0m4.006s 00:38:53.746 user 0m2.261s 00:38:53.746 sys 0m0.763s 00:38:53.746 05:29:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.746 05:29:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:53.746 05:29:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:38:53.746 05:29:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:53.746 05:29:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:53.746 ************************************ 00:38:53.746 END TEST spdk_dd_posix 00:38:53.746 ************************************ 00:38:53.746 00:38:53.746 real 0m18.360s 00:38:53.746 user 0m9.287s 00:38:53.746 sys 0m4.423s 00:38:53.746 05:29:40 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.746 05:29:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:54.006 05:29:40 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:38:54.006 05:29:40 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:54.006 05:29:40 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:54.006 05:29:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:54.006 ************************************ 00:38:54.006 START TEST spdk_dd_malloc 00:38:54.006 ************************************ 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:38:54.006 * Looking for test storage... 00:38:54.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:54.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.006 --rc genhtml_branch_coverage=1 00:38:54.006 --rc genhtml_function_coverage=1 00:38:54.006 --rc genhtml_legend=1 00:38:54.006 --rc geninfo_all_blocks=1 00:38:54.006 --rc geninfo_unexecuted_blocks=1 00:38:54.006 00:38:54.006 ' 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:54.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.006 --rc genhtml_branch_coverage=1 00:38:54.006 --rc genhtml_function_coverage=1 00:38:54.006 --rc genhtml_legend=1 00:38:54.006 --rc geninfo_all_blocks=1 00:38:54.006 --rc geninfo_unexecuted_blocks=1 00:38:54.006 00:38:54.006 ' 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:54.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.006 --rc genhtml_branch_coverage=1 00:38:54.006 --rc genhtml_function_coverage=1 00:38:54.006 --rc genhtml_legend=1 00:38:54.006 --rc geninfo_all_blocks=1 00:38:54.006 --rc geninfo_unexecuted_blocks=1 00:38:54.006 00:38:54.006 ' 00:38:54.006 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:54.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.006 --rc genhtml_branch_coverage=1 00:38:54.006 --rc genhtml_function_coverage=1 00:38:54.006 --rc genhtml_legend=1 00:38:54.006 --rc geninfo_all_blocks=1 00:38:54.006 --rc geninfo_unexecuted_blocks=1 00:38:54.006 00:38:54.006 ' 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:54.007 05:29:40 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:38:54.007 ************************************ 00:38:54.007 START TEST dd_malloc_copy 00:38:54.007 ************************************ 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:38:54.007 05:29:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:38:54.266 [2024-12-09 05:29:41.000396] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:38:54.266 [2024-12-09 05:29:41.000675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60562 ] 00:38:54.266 { 00:38:54.266 "subsystems": [ 00:38:54.266 { 00:38:54.266 "subsystem": "bdev", 00:38:54.266 "config": [ 00:38:54.266 { 00:38:54.266 "params": { 00:38:54.266 "block_size": 512, 00:38:54.266 "num_blocks": 1048576, 00:38:54.266 "name": "malloc0" 00:38:54.266 }, 00:38:54.266 "method": "bdev_malloc_create" 00:38:54.266 }, 00:38:54.266 { 00:38:54.266 "params": { 00:38:54.266 "block_size": 512, 00:38:54.266 "num_blocks": 1048576, 00:38:54.266 "name": "malloc1" 00:38:54.266 }, 00:38:54.266 "method": "bdev_malloc_create" 00:38:54.266 }, 00:38:54.266 { 00:38:54.266 "method": "bdev_wait_for_examine" 00:38:54.266 } 00:38:54.266 ] 00:38:54.266 } 00:38:54.266 ] 00:38:54.266 } 00:38:54.266 [2024-12-09 05:29:41.149490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.266 [2024-12-09 05:29:41.184433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.266 [2024-12-09 05:29:41.214859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:55.637  [2024-12-09T05:29:43.542Z] Copying: 197/512 [MB] (197 MBps) [2024-12-09T05:29:44.114Z] Copying: 395/512 [MB] (197 MBps) [2024-12-09T05:29:44.695Z] Copying: 512/512 [MB] (average 197 MBps) 00:38:57.723 00:38:57.723 05:29:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:38:57.723 05:29:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:38:57.723 05:29:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:38:57.723 05:29:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:38:57.723 [2024-12-09 05:29:44.451050] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:57.723 [2024-12-09 05:29:44.451146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60614 ] 00:38:57.723 { 00:38:57.723 "subsystems": [ 00:38:57.723 { 00:38:57.723 "subsystem": "bdev", 00:38:57.723 "config": [ 00:38:57.723 { 00:38:57.723 "params": { 00:38:57.723 "block_size": 512, 00:38:57.723 "num_blocks": 1048576, 00:38:57.723 "name": "malloc0" 00:38:57.723 }, 00:38:57.723 "method": "bdev_malloc_create" 00:38:57.723 }, 00:38:57.723 { 00:38:57.723 "params": { 00:38:57.723 "block_size": 512, 00:38:57.723 "num_blocks": 1048576, 00:38:57.723 "name": "malloc1" 00:38:57.723 }, 00:38:57.723 "method": "bdev_malloc_create" 00:38:57.723 }, 00:38:57.723 { 00:38:57.723 "method": "bdev_wait_for_examine" 00:38:57.723 } 00:38:57.723 ] 00:38:57.723 } 00:38:57.723 ] 00:38:57.723 } 00:38:57.723 [2024-12-09 05:29:44.605752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.723 [2024-12-09 05:29:44.646777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.723 [2024-12-09 05:29:44.681089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:59.097  [2024-12-09T05:29:47.005Z] Copying: 197/512 [MB] (197 MBps) [2024-12-09T05:29:47.572Z] Copying: 408/512 [MB] (210 MBps) [2024-12-09T05:29:47.830Z] Copying: 512/512 [MB] (average 203 MBps) 00:39:00.858 00:39:01.116 00:39:01.116 real 0m6.881s 00:39:01.116 user 0m6.201s 00:39:01.116 sys 0m0.510s 00:39:01.116 05:29:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:01.116 ************************************ 00:39:01.116 END TEST dd_malloc_copy 00:39:01.116 ************************************ 00:39:01.116 05:29:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:39:01.116 ************************************ 00:39:01.116 END TEST spdk_dd_malloc 00:39:01.116 ************************************ 00:39:01.116 00:39:01.116 real 0m7.102s 00:39:01.116 user 0m6.324s 00:39:01.116 sys 0m0.611s 00:39:01.116 05:29:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:01.116 05:29:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:39:01.116 05:29:47 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:39:01.116 05:29:47 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:01.116 05:29:47 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:01.116 05:29:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:01.116 ************************************ 00:39:01.116 START TEST spdk_dd_bdev_to_bdev 00:39:01.116 ************************************ 00:39:01.116 05:29:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:39:01.116 * Looking for test storage... 
00:39:01.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:01.116 05:29:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:01.116 05:29:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:39:01.116 05:29:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:39:01.116 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:01.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.117 --rc genhtml_branch_coverage=1 00:39:01.117 --rc genhtml_function_coverage=1 00:39:01.117 --rc genhtml_legend=1 00:39:01.117 --rc geninfo_all_blocks=1 00:39:01.117 --rc geninfo_unexecuted_blocks=1 00:39:01.117 00:39:01.117 ' 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:01.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.117 --rc genhtml_branch_coverage=1 00:39:01.117 --rc genhtml_function_coverage=1 00:39:01.117 --rc genhtml_legend=1 00:39:01.117 --rc geninfo_all_blocks=1 00:39:01.117 --rc geninfo_unexecuted_blocks=1 00:39:01.117 00:39:01.117 ' 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:01.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.117 --rc genhtml_branch_coverage=1 00:39:01.117 --rc genhtml_function_coverage=1 00:39:01.117 --rc genhtml_legend=1 00:39:01.117 --rc geninfo_all_blocks=1 00:39:01.117 --rc geninfo_unexecuted_blocks=1 00:39:01.117 00:39:01.117 ' 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:01.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.117 --rc genhtml_branch_coverage=1 00:39:01.117 --rc genhtml_function_coverage=1 00:39:01.117 --rc genhtml_legend=1 00:39:01.117 --rc geninfo_all_blocks=1 00:39:01.117 --rc geninfo_unexecuted_blocks=1 00:39:01.117 00:39:01.117 ' 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:01.117 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:39:01.375 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:01.375 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:01.375 05:29:48 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:01.375 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.375 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.375 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.375 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:01.376 ************************************ 00:39:01.376 START TEST dd_inflate_file 00:39:01.376 ************************************ 00:39:01.376 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:39:01.376 [2024-12-09 05:29:48.158398] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:01.376 [2024-12-09 05:29:48.158503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60726 ] 00:39:01.376 [2024-12-09 05:29:48.312342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.634 [2024-12-09 05:29:48.352535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.634 [2024-12-09 05:29:48.385966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:01.634  [2024-12-09T05:29:48.865Z] Copying: 64/64 [MB] (average 1684 MBps) 00:39:01.893 00:39:01.893 00:39:01.893 real 0m0.543s 00:39:01.893 user 0m0.337s 00:39:01.893 sys 0m0.228s 00:39:01.893 ************************************ 00:39:01.893 END TEST dd_inflate_file 00:39:01.893 ************************************ 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:01.893 ************************************ 00:39:01.893 START TEST dd_copy_to_out_bdev 00:39:01.893 ************************************ 00:39:01.893 05:29:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:39:01.893 { 00:39:01.893 "subsystems": [ 00:39:01.893 { 00:39:01.893 "subsystem": "bdev", 00:39:01.893 "config": [ 00:39:01.893 { 00:39:01.893 "params": { 00:39:01.893 "trtype": "pcie", 00:39:01.893 "traddr": "0000:00:10.0", 00:39:01.893 "name": "Nvme0" 00:39:01.893 }, 00:39:01.893 "method": "bdev_nvme_attach_controller" 00:39:01.893 }, 00:39:01.893 { 00:39:01.893 "params": { 00:39:01.893 "trtype": "pcie", 00:39:01.893 "traddr": "0000:00:11.0", 00:39:01.893 "name": "Nvme1" 00:39:01.893 }, 00:39:01.893 "method": "bdev_nvme_attach_controller" 00:39:01.893 }, 00:39:01.893 { 00:39:01.893 "method": "bdev_wait_for_examine" 00:39:01.893 } 00:39:01.893 ] 00:39:01.893 } 00:39:01.893 ] 00:39:01.893 } 00:39:01.893 [2024-12-09 05:29:48.760258] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:01.893 [2024-12-09 05:29:48.760354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60760 ] 00:39:02.153 [2024-12-09 05:29:48.916498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.153 [2024-12-09 05:29:48.955187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.153 [2024-12-09 05:29:48.990511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:03.530  [2024-12-09T05:29:50.502Z] Copying: 48/64 [MB] (48 MBps) [2024-12-09T05:29:50.762Z] Copying: 64/64 [MB] (average 49 MBps) 00:39:03.790 00:39:03.790 00:39:03.790 real 0m1.939s 00:39:03.790 user 0m1.740s 00:39:03.790 sys 0m1.540s 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:03.790 ************************************ 00:39:03.790 END TEST dd_copy_to_out_bdev 00:39:03.790 ************************************ 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:03.790 ************************************ 00:39:03.790 START TEST dd_offset_magic 00:39:03.790 ************************************ 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:03.790 05:29:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:03.790 [2024-12-09 05:29:50.743989] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:03.790 [2024-12-09 05:29:50.744234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60805 ] 00:39:04.050 { 00:39:04.050 "subsystems": [ 00:39:04.050 { 00:39:04.050 "subsystem": "bdev", 00:39:04.050 "config": [ 00:39:04.050 { 00:39:04.050 "params": { 00:39:04.050 "trtype": "pcie", 00:39:04.050 "traddr": "0000:00:10.0", 00:39:04.050 "name": "Nvme0" 00:39:04.050 }, 00:39:04.050 "method": "bdev_nvme_attach_controller" 00:39:04.050 }, 00:39:04.050 { 00:39:04.050 "params": { 00:39:04.050 "trtype": "pcie", 00:39:04.050 "traddr": "0000:00:11.0", 00:39:04.050 "name": "Nvme1" 00:39:04.050 }, 00:39:04.050 "method": "bdev_nvme_attach_controller" 00:39:04.050 }, 00:39:04.050 { 00:39:04.050 "method": "bdev_wait_for_examine" 00:39:04.050 } 00:39:04.050 ] 00:39:04.050 } 00:39:04.050 ] 00:39:04.050 } 00:39:04.050 [2024-12-09 05:29:50.887506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.050 [2024-12-09 05:29:50.917211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:04.050 [2024-12-09 05:29:50.944239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:04.308  [2024-12-09T05:29:51.538Z] Copying: 65/65 [MB] (average 984 MBps) 00:39:04.566 00:39:04.566 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:39:04.566 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:39:04.566 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:04.566 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:04.566 [2024-12-09 05:29:51.450813] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:04.566 [2024-12-09 05:29:51.450955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60819 ] 00:39:04.566 { 00:39:04.566 "subsystems": [ 00:39:04.566 { 00:39:04.566 "subsystem": "bdev", 00:39:04.566 "config": [ 00:39:04.566 { 00:39:04.566 "params": { 00:39:04.566 "trtype": "pcie", 00:39:04.566 "traddr": "0000:00:10.0", 00:39:04.566 "name": "Nvme0" 00:39:04.566 }, 00:39:04.566 "method": "bdev_nvme_attach_controller" 00:39:04.566 }, 00:39:04.566 { 00:39:04.566 "params": { 00:39:04.566 "trtype": "pcie", 00:39:04.566 "traddr": "0000:00:11.0", 00:39:04.566 "name": "Nvme1" 00:39:04.566 }, 00:39:04.566 "method": "bdev_nvme_attach_controller" 00:39:04.566 }, 00:39:04.566 { 00:39:04.566 "method": "bdev_wait_for_examine" 00:39:04.566 } 00:39:04.566 ] 00:39:04.566 } 00:39:04.566 ] 00:39:04.566 } 00:39:04.823 [2024-12-09 05:29:51.598322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.823 [2024-12-09 05:29:51.628647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:04.824 [2024-12-09 05:29:51.656723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:05.081  [2024-12-09T05:29:52.053Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:39:05.081 00:39:05.081 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:39:05.081 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:39:05.081 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:39:05.081 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:39:05.081 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:39:05.081 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:05.081 05:29:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:05.081 [2024-12-09 05:29:52.039895] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:05.081 [2024-12-09 05:29:52.039999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60836 ] 00:39:05.081 { 00:39:05.081 "subsystems": [ 00:39:05.081 { 00:39:05.081 "subsystem": "bdev", 00:39:05.081 "config": [ 00:39:05.081 { 00:39:05.081 "params": { 00:39:05.081 "trtype": "pcie", 00:39:05.081 "traddr": "0000:00:10.0", 00:39:05.081 "name": "Nvme0" 00:39:05.081 }, 00:39:05.081 "method": "bdev_nvme_attach_controller" 00:39:05.081 }, 00:39:05.081 { 00:39:05.081 "params": { 00:39:05.081 "trtype": "pcie", 00:39:05.081 "traddr": "0000:00:11.0", 00:39:05.081 "name": "Nvme1" 00:39:05.081 }, 00:39:05.081 "method": "bdev_nvme_attach_controller" 00:39:05.081 }, 00:39:05.081 { 00:39:05.081 "method": "bdev_wait_for_examine" 00:39:05.081 } 00:39:05.081 ] 00:39:05.081 } 00:39:05.081 ] 00:39:05.081 } 00:39:05.338 [2024-12-09 05:29:52.190731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.338 [2024-12-09 05:29:52.221560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.338 [2024-12-09 05:29:52.251644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:05.596  [2024-12-09T05:29:52.826Z] Copying: 65/65 [MB] (average 1083 MBps) 00:39:05.854 00:39:05.854 05:29:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:39:05.854 05:29:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:39:05.854 05:29:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:05.854 05:29:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:05.854 { 00:39:05.854 "subsystems": [ 00:39:05.854 { 00:39:05.854 "subsystem": "bdev", 00:39:05.854 "config": [ 00:39:05.854 { 00:39:05.854 "params": { 00:39:05.854 "trtype": "pcie", 00:39:05.854 "traddr": "0000:00:10.0", 00:39:05.854 "name": "Nvme0" 00:39:05.854 }, 00:39:05.854 "method": "bdev_nvme_attach_controller" 00:39:05.854 }, 00:39:05.854 { 00:39:05.854 "params": { 00:39:05.854 "trtype": "pcie", 00:39:05.854 "traddr": "0000:00:11.0", 00:39:05.854 "name": "Nvme1" 00:39:05.854 }, 00:39:05.854 "method": "bdev_nvme_attach_controller" 00:39:05.854 }, 00:39:05.854 { 00:39:05.854 "method": "bdev_wait_for_examine" 00:39:05.854 } 00:39:05.854 ] 00:39:05.854 } 00:39:05.854 ] 00:39:05.854 } 00:39:05.854 [2024-12-09 05:29:52.777123] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:05.854 [2024-12-09 05:29:52.777227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60856 ] 00:39:06.111 [2024-12-09 05:29:52.929262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.111 [2024-12-09 05:29:52.962860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.111 [2024-12-09 05:29:52.994622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:06.369  [2024-12-09T05:29:53.341Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:39:06.369 00:39:06.626 ************************************ 00:39:06.626 END TEST dd_offset_magic 00:39:06.626 ************************************ 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:39:06.626 00:39:06.626 real 0m2.641s 00:39:06.626 user 0m1.995s 00:39:06.626 sys 0m0.651s 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:06.626 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:06.626 [2024-12-09 05:29:53.438600] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
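The clear_nvme helper invoked here zero-fills the region the test wrote: size=4194330 is 4 MiB plus 26 bytes (the length of the magic marker checked with read -rn26), and with bs=1048576 the block count is rounded up to ceil(4194330 / 1048576) = 5, so 5 MiB of zeros (the 5120 KiB reported in the next run) are copied from /dev/zero over Nvme0n1, followed by the same pass over Nvme1n1.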
00:39:06.626 [2024-12-09 05:29:53.438693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60887 ] 00:39:06.626 { 00:39:06.626 "subsystems": [ 00:39:06.626 { 00:39:06.626 "subsystem": "bdev", 00:39:06.626 "config": [ 00:39:06.626 { 00:39:06.626 "params": { 00:39:06.627 "trtype": "pcie", 00:39:06.627 "traddr": "0000:00:10.0", 00:39:06.627 "name": "Nvme0" 00:39:06.627 }, 00:39:06.627 "method": "bdev_nvme_attach_controller" 00:39:06.627 }, 00:39:06.627 { 00:39:06.627 "params": { 00:39:06.627 "trtype": "pcie", 00:39:06.627 "traddr": "0000:00:11.0", 00:39:06.627 "name": "Nvme1" 00:39:06.627 }, 00:39:06.627 "method": "bdev_nvme_attach_controller" 00:39:06.627 }, 00:39:06.627 { 00:39:06.627 "method": "bdev_wait_for_examine" 00:39:06.627 } 00:39:06.627 ] 00:39:06.627 } 00:39:06.627 ] 00:39:06.627 } 00:39:06.627 [2024-12-09 05:29:53.588687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.885 [2024-12-09 05:29:53.618359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.885 [2024-12-09 05:29:53.645462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:06.885  [2024-12-09T05:29:54.115Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:39:07.143 00:39:07.143 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:39:07.143 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:39:07.143 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:39:07.143 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:39:07.143 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:39:07.143 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:39:07.143 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:39:07.143 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:39:07.143 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:07.143 05:29:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:07.143 [2024-12-09 05:29:54.033806] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:07.143 [2024-12-09 05:29:54.033902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60903 ] 00:39:07.143 { 00:39:07.143 "subsystems": [ 00:39:07.143 { 00:39:07.143 "subsystem": "bdev", 00:39:07.143 "config": [ 00:39:07.143 { 00:39:07.143 "params": { 00:39:07.143 "trtype": "pcie", 00:39:07.143 "traddr": "0000:00:10.0", 00:39:07.143 "name": "Nvme0" 00:39:07.143 }, 00:39:07.143 "method": "bdev_nvme_attach_controller" 00:39:07.143 }, 00:39:07.143 { 00:39:07.143 "params": { 00:39:07.143 "trtype": "pcie", 00:39:07.143 "traddr": "0000:00:11.0", 00:39:07.143 "name": "Nvme1" 00:39:07.143 }, 00:39:07.143 "method": "bdev_nvme_attach_controller" 00:39:07.143 }, 00:39:07.143 { 00:39:07.143 "method": "bdev_wait_for_examine" 00:39:07.143 } 00:39:07.143 ] 00:39:07.143 } 00:39:07.143 ] 00:39:07.143 } 00:39:07.401 [2024-12-09 05:29:54.183717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:07.401 [2024-12-09 05:29:54.217654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.401 [2024-12-09 05:29:54.249155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:07.659  [2024-12-09T05:29:54.631Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:39:07.659 00:39:07.659 05:29:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:39:07.659 ************************************ 00:39:07.659 END TEST spdk_dd_bdev_to_bdev 00:39:07.659 ************************************ 00:39:07.659 00:39:07.659 real 0m6.711s 00:39:07.659 user 0m5.137s 00:39:07.659 sys 0m2.975s 00:39:07.659 05:29:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:07.659 05:29:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:07.917 05:29:54 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:39:07.917 05:29:54 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:39:07.917 05:29:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:07.917 05:29:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:07.917 05:29:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:07.917 ************************************ 00:39:07.917 START TEST spdk_dd_uring 00:39:07.917 ************************************ 00:39:07.917 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:39:07.917 * Looking for test storage... 
00:39:07.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:07.917 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:07.917 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:39:07.917 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:07.917 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:07.917 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:07.917 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.918 --rc genhtml_branch_coverage=1 00:39:07.918 --rc genhtml_function_coverage=1 00:39:07.918 --rc genhtml_legend=1 00:39:07.918 --rc geninfo_all_blocks=1 00:39:07.918 --rc geninfo_unexecuted_blocks=1 00:39:07.918 00:39:07.918 ' 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.918 --rc genhtml_branch_coverage=1 00:39:07.918 --rc genhtml_function_coverage=1 00:39:07.918 --rc genhtml_legend=1 00:39:07.918 --rc geninfo_all_blocks=1 00:39:07.918 --rc geninfo_unexecuted_blocks=1 00:39:07.918 00:39:07.918 ' 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.918 --rc genhtml_branch_coverage=1 00:39:07.918 --rc genhtml_function_coverage=1 00:39:07.918 --rc genhtml_legend=1 00:39:07.918 --rc geninfo_all_blocks=1 00:39:07.918 --rc geninfo_unexecuted_blocks=1 00:39:07.918 00:39:07.918 ' 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.918 --rc genhtml_branch_coverage=1 00:39:07.918 --rc genhtml_function_coverage=1 00:39:07.918 --rc genhtml_legend=1 00:39:07.918 --rc geninfo_all_blocks=1 00:39:07.918 --rc geninfo_unexecuted_blocks=1 00:39:07.918 00:39:07.918 ' 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:07.918 05:29:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:39:08.177 ************************************ 00:39:08.177 START TEST dd_uring_copy 00:39:08.177 ************************************ 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:39:08.177 
05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=2nzyxynh8tsffzsdh946gy7wtsf8q5y7cwgv6n72n090zpk2itb3p61inncnhee0szihurdy2l1lxwbkm8xlsbkk9o2bp7mxdj7cizx56chynbhxsbc74qbqf0tse7vlm3ogt64aikvj6ti0y299eh5cg611uzxtk5b51bf7x308pzbkyuijmlm72tlfk169k51jq69ak4ifbqgti81xq22w3gpun8vv433uil0zki4hu4v9vdxu51lxyfkwn7r9z6ko5dewbl0rudyelnh526lyx93ngbzl4podn2ypaot4vd6wxm3zaoqqh2fc3nke7ma1b3kne6w9lptcj5j7bktz4cod346wj5h16xhaa51bzmip1rapll6pr81qa3elztsl4qgv1tegm2c0noxrbl2aiigqmj2e8xc8b2ojhjy6nd1kdejl5w0iduz79fl3cgexk0zm1u012pqwuk5119i1fsaafyv61mrgdwlix84r72bzuqamrtjrjdoti7l0zdc9k7llbu9h1zi1szokf6mde0cxjduhhpm6bocnrn8pkxv3am23c8yhhjl5y2m8o1ztq4bpejzfw2vabioz87qsooxm0m6clf8408cpfe4pp7dxt4o132lyau65m84fim7rogl08m3w1sns2zzu4j9qe0tqppq06jo7ctifyy2xl6xfpeu5yiddst5dkk8f0msf6hhxinrctgpw1zcrl9ou5u9jwu4qvefmbbuwdix9byvlox4giir30gm9w8x97xm27tbqaz4q8lm4aijl3brtoyppe780pmh2pe2x9uxkc6ek37j13t8okj3lgyce2i3b4uqkb6yz7s3rf44gugdwswmsng3g9jlqwcx8m67dd1nxpjb21hcbrb15b346lbfyve5997dz15x4khd8fr9yh8427xbfp6g3fsx1l8aeps98kzd4z15v0boul1w50ovrmfy2a6th7jzbdr4s292qqfqj5tr3jvh92xtnpmk947r041mz6piikbft47w1 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
2nzyxynh8tsffzsdh946gy7wtsf8q5y7cwgv6n72n090zpk2itb3p61inncnhee0szihurdy2l1lxwbkm8xlsbkk9o2bp7mxdj7cizx56chynbhxsbc74qbqf0tse7vlm3ogt64aikvj6ti0y299eh5cg611uzxtk5b51bf7x308pzbkyuijmlm72tlfk169k51jq69ak4ifbqgti81xq22w3gpun8vv433uil0zki4hu4v9vdxu51lxyfkwn7r9z6ko5dewbl0rudyelnh526lyx93ngbzl4podn2ypaot4vd6wxm3zaoqqh2fc3nke7ma1b3kne6w9lptcj5j7bktz4cod346wj5h16xhaa51bzmip1rapll6pr81qa3elztsl4qgv1tegm2c0noxrbl2aiigqmj2e8xc8b2ojhjy6nd1kdejl5w0iduz79fl3cgexk0zm1u012pqwuk5119i1fsaafyv61mrgdwlix84r72bzuqamrtjrjdoti7l0zdc9k7llbu9h1zi1szokf6mde0cxjduhhpm6bocnrn8pkxv3am23c8yhhjl5y2m8o1ztq4bpejzfw2vabioz87qsooxm0m6clf8408cpfe4pp7dxt4o132lyau65m84fim7rogl08m3w1sns2zzu4j9qe0tqppq06jo7ctifyy2xl6xfpeu5yiddst5dkk8f0msf6hhxinrctgpw1zcrl9ou5u9jwu4qvefmbbuwdix9byvlox4giir30gm9w8x97xm27tbqaz4q8lm4aijl3brtoyppe780pmh2pe2x9uxkc6ek37j13t8okj3lgyce2i3b4uqkb6yz7s3rf44gugdwswmsng3g9jlqwcx8m67dd1nxpjb21hcbrb15b346lbfyve5997dz15x4khd8fr9yh8427xbfp6g3fsx1l8aeps98kzd4z15v0boul1w50ovrmfy2a6th7jzbdr4s292qqfqj5tr3jvh92xtnpmk947r041mz6piikbft47w1 00:39:08.177 05:29:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:39:08.177 [2024-12-09 05:29:55.006099] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:39:08.177 [2024-12-09 05:29:55.006255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60981 ] 00:39:08.435 [2024-12-09 05:29:55.175850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.435 [2024-12-09 05:29:55.229798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.435 [2024-12-09 05:29:55.263838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:09.001  [2024-12-09T05:29:56.230Z] Copying: 511/511 [MB] (average 1599 MBps) 00:39:09.258 00:39:09.258 05:29:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:39:09.258 05:29:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:39:09.258 05:29:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:39:09.258 05:29:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:39:09.258 [2024-12-09 05:29:56.085428] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
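The dd_uring_copy test running here pairs a malloc bdev with an io_uring bdev backed by a zram device; a rough sketch of the setup and copy path, reconstructed from the xtrace (the zram sysfs path and the echo redirection are assumptions, as xtrace omits redirections):

    # create zram1 and size it to 512 MiB
    dev_id=$(cat /sys/class/zram-control/hot_add)       # returned 1 above
    echo 512M > "/sys/block/zram${dev_id}/disksize"     # assumed sysfs interface
    # seed magic.dump0 with the 1 KiB random magic line, then pad with zeros
    echo "$magic" > magic.dump0                         # inferred target
    spdk_dd --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1
    # 536869887 bytes + the 1025-byte magic line = 536870912 bytes, exactly 512 MiB
    # copy the file into uring0 (on /dev/zram1), then back out for verification
    spdk_dd --if=magic.dump0 --ob=uring0 --json <(gen_conf)
    spdk_dd --ib=uring0 --of=magic.dump1 --json <(gen_conf)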
00:39:09.259 [2024-12-09 05:29:56.085522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60997 ] 00:39:09.259 { 00:39:09.259 "subsystems": [ 00:39:09.259 { 00:39:09.259 "subsystem": "bdev", 00:39:09.259 "config": [ 00:39:09.259 { 00:39:09.259 "params": { 00:39:09.259 "block_size": 512, 00:39:09.259 "num_blocks": 1048576, 00:39:09.259 "name": "malloc0" 00:39:09.259 }, 00:39:09.259 "method": "bdev_malloc_create" 00:39:09.259 }, 00:39:09.259 { 00:39:09.259 "params": { 00:39:09.259 "filename": "/dev/zram1", 00:39:09.259 "name": "uring0" 00:39:09.259 }, 00:39:09.259 "method": "bdev_uring_create" 00:39:09.259 }, 00:39:09.259 { 00:39:09.259 "method": "bdev_wait_for_examine" 00:39:09.259 } 00:39:09.259 ] 00:39:09.259 } 00:39:09.259 ] 00:39:09.259 } 00:39:09.517 [2024-12-09 05:29:56.233596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.517 [2024-12-09 05:29:56.267027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.517 [2024-12-09 05:29:56.298129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:10.458  [2024-12-09T05:29:58.808Z] Copying: 219/512 [MB] (219 MBps) [2024-12-09T05:29:58.808Z] Copying: 434/512 [MB] (215 MBps) [2024-12-09T05:29:59.376Z] Copying: 512/512 [MB] (average 216 MBps) 00:39:12.404 00:39:12.405 05:29:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:39:12.405 05:29:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:39:12.405 05:29:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:39:12.405 05:29:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:39:12.405 { 00:39:12.405 "subsystems": [ 00:39:12.405 { 00:39:12.405 "subsystem": "bdev", 00:39:12.405 "config": [ 00:39:12.405 { 00:39:12.405 "params": { 00:39:12.405 "block_size": 512, 00:39:12.405 "num_blocks": 1048576, 00:39:12.405 "name": "malloc0" 00:39:12.405 }, 00:39:12.405 "method": "bdev_malloc_create" 00:39:12.405 }, 00:39:12.405 { 00:39:12.405 "params": { 00:39:12.405 "filename": "/dev/zram1", 00:39:12.405 "name": "uring0" 00:39:12.405 }, 00:39:12.405 "method": "bdev_uring_create" 00:39:12.405 }, 00:39:12.405 { 00:39:12.405 "method": "bdev_wait_for_examine" 00:39:12.405 } 00:39:12.405 ] 00:39:12.405 } 00:39:12.405 ] 00:39:12.405 } 00:39:12.405 [2024-12-09 05:29:59.127399] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:12.405 [2024-12-09 05:29:59.127497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61041 ] 00:39:12.405 [2024-12-09 05:29:59.280947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.405 [2024-12-09 05:29:59.314392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.405 [2024-12-09 05:29:59.346656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:13.783  [2024-12-09T05:30:01.691Z] Copying: 170/512 [MB] (170 MBps) [2024-12-09T05:30:02.621Z] Copying: 334/512 [MB] (164 MBps) [2024-12-09T05:30:02.621Z] Copying: 494/512 [MB] (160 MBps) [2024-12-09T05:30:02.879Z] Copying: 512/512 [MB] (average 163 MBps) 00:39:15.907 00:39:15.907 05:30:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:39:15.907 05:30:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 2nzyxynh8tsffzsdh946gy7wtsf8q5y7cwgv6n72n090zpk2itb3p61inncnhee0szihurdy2l1lxwbkm8xlsbkk9o2bp7mxdj7cizx56chynbhxsbc74qbqf0tse7vlm3ogt64aikvj6ti0y299eh5cg611uzxtk5b51bf7x308pzbkyuijmlm72tlfk169k51jq69ak4ifbqgti81xq22w3gpun8vv433uil0zki4hu4v9vdxu51lxyfkwn7r9z6ko5dewbl0rudyelnh526lyx93ngbzl4podn2ypaot4vd6wxm3zaoqqh2fc3nke7ma1b3kne6w9lptcj5j7bktz4cod346wj5h16xhaa51bzmip1rapll6pr81qa3elztsl4qgv1tegm2c0noxrbl2aiigqmj2e8xc8b2ojhjy6nd1kdejl5w0iduz79fl3cgexk0zm1u012pqwuk5119i1fsaafyv61mrgdwlix84r72bzuqamrtjrjdoti7l0zdc9k7llbu9h1zi1szokf6mde0cxjduhhpm6bocnrn8pkxv3am23c8yhhjl5y2m8o1ztq4bpejzfw2vabioz87qsooxm0m6clf8408cpfe4pp7dxt4o132lyau65m84fim7rogl08m3w1sns2zzu4j9qe0tqppq06jo7ctifyy2xl6xfpeu5yiddst5dkk8f0msf6hhxinrctgpw1zcrl9ou5u9jwu4qvefmbbuwdix9byvlox4giir30gm9w8x97xm27tbqaz4q8lm4aijl3brtoyppe780pmh2pe2x9uxkc6ek37j13t8okj3lgyce2i3b4uqkb6yz7s3rf44gugdwswmsng3g9jlqwcx8m67dd1nxpjb21hcbrb15b346lbfyve5997dz15x4khd8fr9yh8427xbfp6g3fsx1l8aeps98kzd4z15v0boul1w50ovrmfy2a6th7jzbdr4s292qqfqj5tr3jvh92xtnpmk947r041mz6piikbft47w1 == 
\2\n\z\y\x\y\n\h\8\t\s\f\f\z\s\d\h\9\4\6\g\y\7\w\t\s\f\8\q\5\y\7\c\w\g\v\6\n\7\2\n\0\9\0\z\p\k\2\i\t\b\3\p\6\1\i\n\n\c\n\h\e\e\0\s\z\i\h\u\r\d\y\2\l\1\l\x\w\b\k\m\8\x\l\s\b\k\k\9\o\2\b\p\7\m\x\d\j\7\c\i\z\x\5\6\c\h\y\n\b\h\x\s\b\c\7\4\q\b\q\f\0\t\s\e\7\v\l\m\3\o\g\t\6\4\a\i\k\v\j\6\t\i\0\y\2\9\9\e\h\5\c\g\6\1\1\u\z\x\t\k\5\b\5\1\b\f\7\x\3\0\8\p\z\b\k\y\u\i\j\m\l\m\7\2\t\l\f\k\1\6\9\k\5\1\j\q\6\9\a\k\4\i\f\b\q\g\t\i\8\1\x\q\2\2\w\3\g\p\u\n\8\v\v\4\3\3\u\i\l\0\z\k\i\4\h\u\4\v\9\v\d\x\u\5\1\l\x\y\f\k\w\n\7\r\9\z\6\k\o\5\d\e\w\b\l\0\r\u\d\y\e\l\n\h\5\2\6\l\y\x\9\3\n\g\b\z\l\4\p\o\d\n\2\y\p\a\o\t\4\v\d\6\w\x\m\3\z\a\o\q\q\h\2\f\c\3\n\k\e\7\m\a\1\b\3\k\n\e\6\w\9\l\p\t\c\j\5\j\7\b\k\t\z\4\c\o\d\3\4\6\w\j\5\h\1\6\x\h\a\a\5\1\b\z\m\i\p\1\r\a\p\l\l\6\p\r\8\1\q\a\3\e\l\z\t\s\l\4\q\g\v\1\t\e\g\m\2\c\0\n\o\x\r\b\l\2\a\i\i\g\q\m\j\2\e\8\x\c\8\b\2\o\j\h\j\y\6\n\d\1\k\d\e\j\l\5\w\0\i\d\u\z\7\9\f\l\3\c\g\e\x\k\0\z\m\1\u\0\1\2\p\q\w\u\k\5\1\1\9\i\1\f\s\a\a\f\y\v\6\1\m\r\g\d\w\l\i\x\8\4\r\7\2\b\z\u\q\a\m\r\t\j\r\j\d\o\t\i\7\l\0\z\d\c\9\k\7\l\l\b\u\9\h\1\z\i\1\s\z\o\k\f\6\m\d\e\0\c\x\j\d\u\h\h\p\m\6\b\o\c\n\r\n\8\p\k\x\v\3\a\m\2\3\c\8\y\h\h\j\l\5\y\2\m\8\o\1\z\t\q\4\b\p\e\j\z\f\w\2\v\a\b\i\o\z\8\7\q\s\o\o\x\m\0\m\6\c\l\f\8\4\0\8\c\p\f\e\4\p\p\7\d\x\t\4\o\1\3\2\l\y\a\u\6\5\m\8\4\f\i\m\7\r\o\g\l\0\8\m\3\w\1\s\n\s\2\z\z\u\4\j\9\q\e\0\t\q\p\p\q\0\6\j\o\7\c\t\i\f\y\y\2\x\l\6\x\f\p\e\u\5\y\i\d\d\s\t\5\d\k\k\8\f\0\m\s\f\6\h\h\x\i\n\r\c\t\g\p\w\1\z\c\r\l\9\o\u\5\u\9\j\w\u\4\q\v\e\f\m\b\b\u\w\d\i\x\9\b\y\v\l\o\x\4\g\i\i\r\3\0\g\m\9\w\8\x\9\7\x\m\2\7\t\b\q\a\z\4\q\8\l\m\4\a\i\j\l\3\b\r\t\o\y\p\p\e\7\8\0\p\m\h\2\p\e\2\x\9\u\x\k\c\6\e\k\3\7\j\1\3\t\8\o\k\j\3\l\g\y\c\e\2\i\3\b\4\u\q\k\b\6\y\z\7\s\3\r\f\4\4\g\u\g\d\w\s\w\m\s\n\g\3\g\9\j\l\q\w\c\x\8\m\6\7\d\d\1\n\x\p\j\b\2\1\h\c\b\r\b\1\5\b\3\4\6\l\b\f\y\v\e\5\9\9\7\d\z\1\5\x\4\k\h\d\8\f\r\9\y\h\8\4\2\7\x\b\f\p\6\g\3\f\s\x\1\l\8\a\e\p\s\9\8\k\z\d\4\z\1\5\v\0\b\o\u\l\1\w\5\0\o\v\r\m\f\y\2\a\6\t\h\7\j\z\b\d\r\4\s\2\9\2\q\q\f\q\j\5\t\r\3\j\v\h\9\2\x\t\n\p\m\k\9\4\7\r\0\4\1\m\z\6\p\i\i\k\b\f\t\4\7\w\1 ]] 00:39:15.907 05:30:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:39:15.907 05:30:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 2nzyxynh8tsffzsdh946gy7wtsf8q5y7cwgv6n72n090zpk2itb3p61inncnhee0szihurdy2l1lxwbkm8xlsbkk9o2bp7mxdj7cizx56chynbhxsbc74qbqf0tse7vlm3ogt64aikvj6ti0y299eh5cg611uzxtk5b51bf7x308pzbkyuijmlm72tlfk169k51jq69ak4ifbqgti81xq22w3gpun8vv433uil0zki4hu4v9vdxu51lxyfkwn7r9z6ko5dewbl0rudyelnh526lyx93ngbzl4podn2ypaot4vd6wxm3zaoqqh2fc3nke7ma1b3kne6w9lptcj5j7bktz4cod346wj5h16xhaa51bzmip1rapll6pr81qa3elztsl4qgv1tegm2c0noxrbl2aiigqmj2e8xc8b2ojhjy6nd1kdejl5w0iduz79fl3cgexk0zm1u012pqwuk5119i1fsaafyv61mrgdwlix84r72bzuqamrtjrjdoti7l0zdc9k7llbu9h1zi1szokf6mde0cxjduhhpm6bocnrn8pkxv3am23c8yhhjl5y2m8o1ztq4bpejzfw2vabioz87qsooxm0m6clf8408cpfe4pp7dxt4o132lyau65m84fim7rogl08m3w1sns2zzu4j9qe0tqppq06jo7ctifyy2xl6xfpeu5yiddst5dkk8f0msf6hhxinrctgpw1zcrl9ou5u9jwu4qvefmbbuwdix9byvlox4giir30gm9w8x97xm27tbqaz4q8lm4aijl3brtoyppe780pmh2pe2x9uxkc6ek37j13t8okj3lgyce2i3b4uqkb6yz7s3rf44gugdwswmsng3g9jlqwcx8m67dd1nxpjb21hcbrb15b346lbfyve5997dz15x4khd8fr9yh8427xbfp6g3fsx1l8aeps98kzd4z15v0boul1w50ovrmfy2a6th7jzbdr4s292qqfqj5tr3jvh92xtnpmk947r041mz6piikbft47w1 == 
\2\n\z\y\x\y\n\h\8\t\s\f\f\z\s\d\h\9\4\6\g\y\7\w\t\s\f\8\q\5\y\7\c\w\g\v\6\n\7\2\n\0\9\0\z\p\k\2\i\t\b\3\p\6\1\i\n\n\c\n\h\e\e\0\s\z\i\h\u\r\d\y\2\l\1\l\x\w\b\k\m\8\x\l\s\b\k\k\9\o\2\b\p\7\m\x\d\j\7\c\i\z\x\5\6\c\h\y\n\b\h\x\s\b\c\7\4\q\b\q\f\0\t\s\e\7\v\l\m\3\o\g\t\6\4\a\i\k\v\j\6\t\i\0\y\2\9\9\e\h\5\c\g\6\1\1\u\z\x\t\k\5\b\5\1\b\f\7\x\3\0\8\p\z\b\k\y\u\i\j\m\l\m\7\2\t\l\f\k\1\6\9\k\5\1\j\q\6\9\a\k\4\i\f\b\q\g\t\i\8\1\x\q\2\2\w\3\g\p\u\n\8\v\v\4\3\3\u\i\l\0\z\k\i\4\h\u\4\v\9\v\d\x\u\5\1\l\x\y\f\k\w\n\7\r\9\z\6\k\o\5\d\e\w\b\l\0\r\u\d\y\e\l\n\h\5\2\6\l\y\x\9\3\n\g\b\z\l\4\p\o\d\n\2\y\p\a\o\t\4\v\d\6\w\x\m\3\z\a\o\q\q\h\2\f\c\3\n\k\e\7\m\a\1\b\3\k\n\e\6\w\9\l\p\t\c\j\5\j\7\b\k\t\z\4\c\o\d\3\4\6\w\j\5\h\1\6\x\h\a\a\5\1\b\z\m\i\p\1\r\a\p\l\l\6\p\r\8\1\q\a\3\e\l\z\t\s\l\4\q\g\v\1\t\e\g\m\2\c\0\n\o\x\r\b\l\2\a\i\i\g\q\m\j\2\e\8\x\c\8\b\2\o\j\h\j\y\6\n\d\1\k\d\e\j\l\5\w\0\i\d\u\z\7\9\f\l\3\c\g\e\x\k\0\z\m\1\u\0\1\2\p\q\w\u\k\5\1\1\9\i\1\f\s\a\a\f\y\v\6\1\m\r\g\d\w\l\i\x\8\4\r\7\2\b\z\u\q\a\m\r\t\j\r\j\d\o\t\i\7\l\0\z\d\c\9\k\7\l\l\b\u\9\h\1\z\i\1\s\z\o\k\f\6\m\d\e\0\c\x\j\d\u\h\h\p\m\6\b\o\c\n\r\n\8\p\k\x\v\3\a\m\2\3\c\8\y\h\h\j\l\5\y\2\m\8\o\1\z\t\q\4\b\p\e\j\z\f\w\2\v\a\b\i\o\z\8\7\q\s\o\o\x\m\0\m\6\c\l\f\8\4\0\8\c\p\f\e\4\p\p\7\d\x\t\4\o\1\3\2\l\y\a\u\6\5\m\8\4\f\i\m\7\r\o\g\l\0\8\m\3\w\1\s\n\s\2\z\z\u\4\j\9\q\e\0\t\q\p\p\q\0\6\j\o\7\c\t\i\f\y\y\2\x\l\6\x\f\p\e\u\5\y\i\d\d\s\t\5\d\k\k\8\f\0\m\s\f\6\h\h\x\i\n\r\c\t\g\p\w\1\z\c\r\l\9\o\u\5\u\9\j\w\u\4\q\v\e\f\m\b\b\u\w\d\i\x\9\b\y\v\l\o\x\4\g\i\i\r\3\0\g\m\9\w\8\x\9\7\x\m\2\7\t\b\q\a\z\4\q\8\l\m\4\a\i\j\l\3\b\r\t\o\y\p\p\e\7\8\0\p\m\h\2\p\e\2\x\9\u\x\k\c\6\e\k\3\7\j\1\3\t\8\o\k\j\3\l\g\y\c\e\2\i\3\b\4\u\q\k\b\6\y\z\7\s\3\r\f\4\4\g\u\g\d\w\s\w\m\s\n\g\3\g\9\j\l\q\w\c\x\8\m\6\7\d\d\1\n\x\p\j\b\2\1\h\c\b\r\b\1\5\b\3\4\6\l\b\f\y\v\e\5\9\9\7\d\z\1\5\x\4\k\h\d\8\f\r\9\y\h\8\4\2\7\x\b\f\p\6\g\3\f\s\x\1\l\8\a\e\p\s\9\8\k\z\d\4\z\1\5\v\0\b\o\u\l\1\w\5\0\o\v\r\m\f\y\2\a\6\t\h\7\j\z\b\d\r\4\s\2\9\2\q\q\f\q\j\5\t\r\3\j\v\h\9\2\x\t\n\p\m\k\9\4\7\r\0\4\1\m\z\6\p\i\i\k\b\f\t\4\7\w\1 ]] 00:39:15.907 05:30:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:39:16.472 05:30:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:39:16.472 05:30:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:39:16.472 05:30:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:39:16.472 05:30:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:39:16.472 [2024-12-09 05:30:03.282901] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
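The two huge [[ ... == \2\n\z... ]] expressions above are ordinary bash pattern matches as printed by xtrace, which escapes every character on the right-hand side of ==; the underlying check reads the first 1 KiB back from each copy target and compares it with the generated magic, then diffs the whole files. The exact read sources are not visible in the trace, so the redirection below is inferred:

    read -rn1024 verify_magic < magic.dump1    # source inferred
    [[ $verify_magic == "$magic" ]]
    diff -q magic.dump0 magic.dump1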
00:39:16.472 [2024-12-09 05:30:03.282998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61111 ] 00:39:16.472 { 00:39:16.472 "subsystems": [ 00:39:16.472 { 00:39:16.472 "subsystem": "bdev", 00:39:16.472 "config": [ 00:39:16.472 { 00:39:16.472 "params": { 00:39:16.472 "block_size": 512, 00:39:16.472 "num_blocks": 1048576, 00:39:16.472 "name": "malloc0" 00:39:16.472 }, 00:39:16.472 "method": "bdev_malloc_create" 00:39:16.472 }, 00:39:16.472 { 00:39:16.472 "params": { 00:39:16.472 "filename": "/dev/zram1", 00:39:16.472 "name": "uring0" 00:39:16.472 }, 00:39:16.472 "method": "bdev_uring_create" 00:39:16.472 }, 00:39:16.472 { 00:39:16.472 "method": "bdev_wait_for_examine" 00:39:16.472 } 00:39:16.472 ] 00:39:16.472 } 00:39:16.472 ] 00:39:16.472 } 00:39:16.472 [2024-12-09 05:30:03.431602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.729 [2024-12-09 05:30:03.464489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.729 [2024-12-09 05:30:03.493870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:17.662  [2024-12-09T05:30:06.003Z] Copying: 150/512 [MB] (150 MBps) [2024-12-09T05:30:06.938Z] Copying: 306/512 [MB] (155 MBps) [2024-12-09T05:30:07.196Z] Copying: 454/512 [MB] (147 MBps) [2024-12-09T05:30:07.456Z] Copying: 512/512 [MB] (average 150 MBps) 00:39:20.484 00:39:20.484 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:39:20.484 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:39:20.484 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:39:20.484 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:39:20.484 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:39:20.484 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:39:20.484 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:39:20.484 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:39:20.484 { 00:39:20.484 "subsystems": [ 00:39:20.484 { 00:39:20.484 "subsystem": "bdev", 00:39:20.484 "config": [ 00:39:20.484 { 00:39:20.484 "params": { 00:39:20.484 "block_size": 512, 00:39:20.484 "num_blocks": 1048576, 00:39:20.484 "name": "malloc0" 00:39:20.484 }, 00:39:20.484 "method": "bdev_malloc_create" 00:39:20.484 }, 00:39:20.484 { 00:39:20.484 "params": { 00:39:20.484 "filename": "/dev/zram1", 00:39:20.484 "name": "uring0" 00:39:20.484 }, 00:39:20.484 "method": "bdev_uring_create" 00:39:20.484 }, 00:39:20.484 { 00:39:20.484 "params": { 00:39:20.484 "name": "uring0" 00:39:20.484 }, 00:39:20.484 "method": "bdev_uring_delete" 00:39:20.484 }, 00:39:20.484 { 00:39:20.484 "method": "bdev_wait_for_examine" 00:39:20.484 } 00:39:20.484 ] 00:39:20.484 } 00:39:20.484 ] 00:39:20.484 } 00:39:20.484 [2024-12-09 05:30:07.356239] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:20.484 [2024-12-09 05:30:07.356357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61167 ] 00:39:20.742 [2024-12-09 05:30:07.513643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:20.742 [2024-12-09 05:30:07.546675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.742 [2024-12-09 05:30:07.576620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:20.742  [2024-12-09T05:30:07.972Z] Copying: 0/0 [B] (average 0 Bps) 00:39:21.000 00:39:21.258 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:21.259 05:30:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:39:21.259 [2024-12-09 05:30:08.032428] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
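The spdk_dd run that begins here is expected to fail: its JSON config (echoed below) creates uring0 and then immediately removes it with bdev_uring_delete, so when dd tries to open uring0 as its input bdev it no longer exists. The "unable to find bdev" / "No such device" errors that follow are the intended outcome; the NOT wrapper records the non-zero exit status (es=237) and the step passes precisely because the command failed.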
00:39:21.259 [2024-12-09 05:30:08.032757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61195 ] 00:39:21.259 { 00:39:21.259 "subsystems": [ 00:39:21.259 { 00:39:21.259 "subsystem": "bdev", 00:39:21.259 "config": [ 00:39:21.259 { 00:39:21.259 "params": { 00:39:21.259 "block_size": 512, 00:39:21.259 "num_blocks": 1048576, 00:39:21.259 "name": "malloc0" 00:39:21.259 }, 00:39:21.259 "method": "bdev_malloc_create" 00:39:21.259 }, 00:39:21.259 { 00:39:21.259 "params": { 00:39:21.259 "filename": "/dev/zram1", 00:39:21.259 "name": "uring0" 00:39:21.259 }, 00:39:21.259 "method": "bdev_uring_create" 00:39:21.259 }, 00:39:21.259 { 00:39:21.259 "params": { 00:39:21.259 "name": "uring0" 00:39:21.259 }, 00:39:21.259 "method": "bdev_uring_delete" 00:39:21.259 }, 00:39:21.259 { 00:39:21.259 "method": "bdev_wait_for_examine" 00:39:21.259 } 00:39:21.259 ] 00:39:21.259 } 00:39:21.259 ] 00:39:21.259 } 00:39:21.259 [2024-12-09 05:30:08.190296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:21.517 [2024-12-09 05:30:08.229060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.517 [2024-12-09 05:30:08.262834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:21.517 [2024-12-09 05:30:08.399011] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:39:21.517 [2024-12-09 05:30:08.399072] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:39:21.517 [2024-12-09 05:30:08.399086] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:39:21.517 [2024-12-09 05:30:08.399099] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:21.776 [2024-12-09 05:30:08.568320] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:39:21.776 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:39:22.034 00:39:22.034 real 0m14.021s 00:39:22.034 user 0m9.806s 00:39:22.034 sys 0m11.904s 00:39:22.034 05:30:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:22.034 05:30:08 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:39:22.034 ************************************ 00:39:22.034 END TEST dd_uring_copy 00:39:22.034 ************************************ 00:39:22.034 00:39:22.034 real 0m14.289s 00:39:22.034 user 0m9.956s 00:39:22.034 sys 0m12.019s 00:39:22.034 05:30:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:22.034 05:30:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:39:22.035 ************************************ 00:39:22.035 END TEST spdk_dd_uring 00:39:22.035 ************************************ 00:39:22.035 05:30:08 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:39:22.035 05:30:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:22.035 05:30:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:22.035 05:30:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:22.035 ************************************ 00:39:22.035 START TEST spdk_dd_sparse 00:39:22.035 ************************************ 00:39:22.035 05:30:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:39:22.293 * Looking for test storage... 00:39:22.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:22.293 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:22.293 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:22.293 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:39:22.293 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:22.293 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:22.293 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:22.293 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:22.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.294 --rc genhtml_branch_coverage=1 00:39:22.294 --rc genhtml_function_coverage=1 00:39:22.294 --rc genhtml_legend=1 00:39:22.294 --rc geninfo_all_blocks=1 00:39:22.294 --rc geninfo_unexecuted_blocks=1 00:39:22.294 00:39:22.294 ' 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:22.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.294 --rc genhtml_branch_coverage=1 00:39:22.294 --rc genhtml_function_coverage=1 00:39:22.294 --rc genhtml_legend=1 00:39:22.294 --rc geninfo_all_blocks=1 00:39:22.294 --rc geninfo_unexecuted_blocks=1 00:39:22.294 00:39:22.294 ' 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:22.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.294 --rc genhtml_branch_coverage=1 00:39:22.294 --rc genhtml_function_coverage=1 00:39:22.294 --rc genhtml_legend=1 00:39:22.294 --rc geninfo_all_blocks=1 00:39:22.294 --rc geninfo_unexecuted_blocks=1 00:39:22.294 00:39:22.294 ' 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:22.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.294 --rc genhtml_branch_coverage=1 00:39:22.294 --rc genhtml_function_coverage=1 00:39:22.294 --rc genhtml_legend=1 00:39:22.294 --rc geninfo_all_blocks=1 00:39:22.294 --rc geninfo_unexecuted_blocks=1 00:39:22.294 00:39:22.294 ' 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:22.294 05:30:09 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:39:22.294 1+0 records in 00:39:22.294 1+0 records out 00:39:22.294 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00542851 s, 773 MB/s 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:39:22.294 1+0 records in 00:39:22.294 1+0 records out 00:39:22.294 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00548317 s, 765 MB/s 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:39:22.294 1+0 records in 00:39:22.294 1+0 records out 00:39:22.294 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00410067 s, 1.0 GB/s 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:22.294 ************************************ 00:39:22.294 START TEST dd_sparse_file_to_file 00:39:22.294 ************************************ 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:39:22.294 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:22.553 [2024-12-09 05:30:09.266817] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
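For readability, the prepare step traced above reduces to a handful of ordinary shell commands. The commands and sizes below are copied from the trace; the annotations are an interpretation of what they produce, not part of sparse.sh itself:

    truncate dd_sparse_aio_disk --size 104857600        # 100 MiB backing file for the dd_aio bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1          # 4 MiB of data at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # 4 MiB at offset 16 MiB, leaving a hole behind it
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # 4 MiB at offset 32 MiB

The result is a 36 MiB source file with three 4 MiB data extents and two holes, which is the layout the --sparse copy being started here is expected to preserve.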
00:39:22.553 [2024-12-09 05:30:09.267111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61290 ] 00:39:22.553 { 00:39:22.553 "subsystems": [ 00:39:22.553 { 00:39:22.553 "subsystem": "bdev", 00:39:22.553 "config": [ 00:39:22.553 { 00:39:22.553 "params": { 00:39:22.553 "block_size": 4096, 00:39:22.553 "filename": "dd_sparse_aio_disk", 00:39:22.553 "name": "dd_aio" 00:39:22.553 }, 00:39:22.553 "method": "bdev_aio_create" 00:39:22.553 }, 00:39:22.553 { 00:39:22.553 "params": { 00:39:22.553 "lvs_name": "dd_lvstore", 00:39:22.553 "bdev_name": "dd_aio" 00:39:22.553 }, 00:39:22.553 "method": "bdev_lvol_create_lvstore" 00:39:22.553 }, 00:39:22.553 { 00:39:22.553 "method": "bdev_wait_for_examine" 00:39:22.553 } 00:39:22.553 ] 00:39:22.553 } 00:39:22.553 ] 00:39:22.553 } 00:39:22.553 [2024-12-09 05:30:09.423522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.553 [2024-12-09 05:30:09.462462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.553 [2024-12-09 05:30:09.495045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:22.811  [2024-12-09T05:30:10.042Z] Copying: 12/36 [MB] (average 600 MBps) 00:39:23.070 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:39:23.070 00:39:23.070 real 0m0.619s 00:39:23.070 user 0m0.403s 00:39:23.070 sys 0m0.265s 00:39:23.070 ************************************ 00:39:23.070 END TEST dd_sparse_file_to_file 00:39:23.070 ************************************ 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:23.070 ************************************ 00:39:23.070 START TEST dd_sparse_file_to_bdev 
00:39:23.070 ************************************ 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:23.070 05:30:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:23.070 [2024-12-09 05:30:09.929824] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:39:23.070 [2024-12-09 05:30:09.929935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61331 ] 00:39:23.070 { 00:39:23.070 "subsystems": [ 00:39:23.070 { 00:39:23.070 "subsystem": "bdev", 00:39:23.070 "config": [ 00:39:23.070 { 00:39:23.070 "params": { 00:39:23.070 "block_size": 4096, 00:39:23.070 "filename": "dd_sparse_aio_disk", 00:39:23.070 "name": "dd_aio" 00:39:23.070 }, 00:39:23.070 "method": "bdev_aio_create" 00:39:23.070 }, 00:39:23.070 { 00:39:23.070 "params": { 00:39:23.070 "lvs_name": "dd_lvstore", 00:39:23.070 "lvol_name": "dd_lvol", 00:39:23.070 "size_in_mib": 36, 00:39:23.070 "thin_provision": true 00:39:23.070 }, 00:39:23.070 "method": "bdev_lvol_create" 00:39:23.070 }, 00:39:23.070 { 00:39:23.070 "method": "bdev_wait_for_examine" 00:39:23.070 } 00:39:23.070 ] 00:39:23.070 } 00:39:23.070 ] 00:39:23.070 } 00:39:23.328 [2024-12-09 05:30:10.077999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:23.328 [2024-12-09 05:30:10.110962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:23.328 [2024-12-09 05:30:10.140500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:23.328  [2024-12-09T05:30:10.558Z] Copying: 12/36 [MB] (average 666 MBps) 00:39:23.586 00:39:23.586 ************************************ 00:39:23.586 END TEST dd_sparse_file_to_bdev 00:39:23.586 ************************************ 00:39:23.586 00:39:23.586 real 0m0.538s 00:39:23.586 user 0m0.357s 00:39:23.586 sys 0m0.232s 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:23.586 ************************************ 00:39:23.586 START TEST dd_sparse_bdev_to_file 00:39:23.586 ************************************ 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:39:23.586 05:30:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:23.586 [2024-12-09 05:30:10.506925] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:39:23.586 [2024-12-09 05:30:10.507017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61365 ] 00:39:23.586 { 00:39:23.586 "subsystems": [ 00:39:23.586 { 00:39:23.586 "subsystem": "bdev", 00:39:23.586 "config": [ 00:39:23.586 { 00:39:23.586 "params": { 00:39:23.586 "block_size": 4096, 00:39:23.586 "filename": "dd_sparse_aio_disk", 00:39:23.586 "name": "dd_aio" 00:39:23.586 }, 00:39:23.586 "method": "bdev_aio_create" 00:39:23.586 }, 00:39:23.586 { 00:39:23.586 "method": "bdev_wait_for_examine" 00:39:23.586 } 00:39:23.586 ] 00:39:23.586 } 00:39:23.586 ] 00:39:23.586 } 00:39:23.845 [2024-12-09 05:30:10.661376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:23.845 [2024-12-09 05:30:10.700301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:23.845 [2024-12-09 05:30:10.733402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:23.845  [2024-12-09T05:30:11.076Z] Copying: 12/36 [MB] (average 1000 MBps) 00:39:24.104 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:39:24.104 ************************************ 00:39:24.104 END TEST dd_sparse_bdev_to_file 00:39:24.104 ************************************ 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:39:24.104 00:39:24.104 real 0m0.570s 00:39:24.104 user 0m0.382s 00:39:24.104 sys 0m0.245s 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:39:24.104 05:30:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:39:24.363 05:30:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:39:24.363 05:30:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:39:24.363 05:30:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:39:24.363 ************************************ 00:39:24.363 END TEST spdk_dd_sparse 00:39:24.363 ************************************ 00:39:24.363 00:39:24.363 real 0m2.097s 00:39:24.363 user 0m1.311s 00:39:24.363 sys 0m0.939s 00:39:24.363 05:30:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.363 05:30:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:24.363 05:30:11 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:39:24.363 05:30:11 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:24.363 05:30:11 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.363 05:30:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:24.363 ************************************ 00:39:24.363 START TEST spdk_dd_negative 00:39:24.363 ************************************ 00:39:24.363 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:39:24.363 * Looking for test storage... 
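The pass criteria for the three sparse copies above come down to two stat comparisons. In GNU stat terms the assertion looks like the sketch below (illustrative only; the files themselves were already removed by the cleanup step):

    stat --printf='%s\n' file_zero1 file_zero2 file_zero3   # apparent size: 37748736 bytes (36 MiB) each
    stat --printf='%b\n' file_zero1 file_zero2 file_zero3   # allocated 512-byte blocks: 24576 (12 MiB) each

Matching %s values show the logical length survived each copy; matching %b values show that only the 12 MiB of real data was allocated, i.e. the holes were carried through file_zero1 -> dd_lvol -> file_zero3 rather than being filled with zeroes.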
00:39:24.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.364 --rc genhtml_branch_coverage=1 00:39:24.364 --rc genhtml_function_coverage=1 00:39:24.364 --rc genhtml_legend=1 00:39:24.364 --rc geninfo_all_blocks=1 00:39:24.364 --rc geninfo_unexecuted_blocks=1 00:39:24.364 00:39:24.364 ' 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.364 --rc genhtml_branch_coverage=1 00:39:24.364 --rc genhtml_function_coverage=1 00:39:24.364 --rc genhtml_legend=1 00:39:24.364 --rc geninfo_all_blocks=1 00:39:24.364 --rc geninfo_unexecuted_blocks=1 00:39:24.364 00:39:24.364 ' 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.364 --rc genhtml_branch_coverage=1 00:39:24.364 --rc genhtml_function_coverage=1 00:39:24.364 --rc genhtml_legend=1 00:39:24.364 --rc geninfo_all_blocks=1 00:39:24.364 --rc geninfo_unexecuted_blocks=1 00:39:24.364 00:39:24.364 ' 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.364 --rc genhtml_branch_coverage=1 00:39:24.364 --rc genhtml_function_coverage=1 00:39:24.364 --rc genhtml_legend=1 00:39:24.364 --rc geninfo_all_blocks=1 00:39:24.364 --rc geninfo_unexecuted_blocks=1 00:39:24.364 00:39:24.364 ' 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.364 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:24.624 ************************************ 00:39:24.624 START TEST 
dd_invalid_arguments 00:39:24.624 ************************************ 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:24.624 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:24.624 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:39:24.624 00:39:24.624 CPU options: 00:39:24.624 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:39:24.624 (like [0,1,10]) 00:39:24.624 --lcores lcore to CPU mapping list. The list is in the format: 00:39:24.624 [<,lcores[@CPUs]>...] 00:39:24.624 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:39:24.624 Within the group, '-' is used for range separator, 00:39:24.624 ',' is used for single number separator. 00:39:24.624 '( )' can be omitted for single element group, 00:39:24.624 '@' can be omitted if cpus and lcores have the same value 00:39:24.624 --disable-cpumask-locks Disable CPU core lock files. 00:39:24.624 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:39:24.624 pollers in the app support interrupt mode) 00:39:24.624 -p, --main-core main (primary) core for DPDK 00:39:24.624 00:39:24.624 Configuration options: 00:39:24.624 -c, --config, --json JSON config file 00:39:24.624 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:39:24.624 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:39:24.624 --wait-for-rpc wait for RPCs to initialize subsystems 00:39:24.624 --rpcs-allowed comma-separated list of permitted RPCS 00:39:24.624 --json-ignore-init-errors don't exit on invalid config entry 00:39:24.624 00:39:24.624 Memory options: 00:39:24.624 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:39:24.624 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:39:24.624 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:39:24.624 -R, --huge-unlink unlink huge files after initialization 00:39:24.624 -n, --mem-channels number of memory channels used for DPDK 00:39:24.624 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:39:24.624 --msg-mempool-size global message memory pool size in count (default: 262143) 00:39:24.624 --no-huge run without using hugepages 00:39:24.624 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:39:24.624 -i, --shm-id shared memory ID (optional) 00:39:24.624 -g, --single-file-segments force creating just one hugetlbfs file 00:39:24.624 00:39:24.624 PCI options: 00:39:24.624 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:39:24.624 -B, --pci-blocked pci addr to block (can be used more than once) 00:39:24.624 -u, --no-pci disable PCI access 00:39:24.624 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:39:24.624 00:39:24.624 Log options: 00:39:24.624 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:39:24.624 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:39:24.624 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:39:24.624 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:39:24.624 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:39:24.624 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:39:24.624 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:39:24.624 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:39:24.625 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:39:24.625 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:39:24.625 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:39:24.625 --silence-noticelog disable notice level logging to stderr 00:39:24.625 00:39:24.625 Trace options: 00:39:24.625 --num-trace-entries number of trace entries for each core, must be power of 2, 00:39:24.625 setting 0 to disable trace (default 32768) 00:39:24.625 Tracepoints vary in size and can use more than one trace entry. 00:39:24.625 -e, --tpoint-group [:] 00:39:24.625 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:39:24.625 [2024-12-09 05:30:11.395393] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:39:24.625 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:39:24.625 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:39:24.625 bdev_raid, scheduler, all). 00:39:24.625 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:39:24.625 a tracepoint group. First tpoint inside a group can be enabled by 00:39:24.625 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:39:24.625 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:39:24.625 in /include/spdk_internal/trace_defs.h 00:39:24.625 00:39:24.625 Other options: 00:39:24.625 -h, --help show this usage 00:39:24.625 -v, --version print SPDK version 00:39:24.625 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:39:24.625 --env-context Opaque context for use of the env implementation 00:39:24.625 00:39:24.625 Application specific: 00:39:24.625 [--------- DD Options ---------] 00:39:24.625 --if Input file. Must specify either --if or --ib. 00:39:24.625 --ib Input bdev. Must specifier either --if or --ib 00:39:24.625 --of Output file. Must specify either --of or --ob. 00:39:24.625 --ob Output bdev. Must specify either --of or --ob. 00:39:24.625 --iflag Input file flags. 00:39:24.625 --oflag Output file flags. 00:39:24.625 --bs I/O unit size (default: 4096) 00:39:24.625 --qd Queue depth (default: 2) 00:39:24.625 --count I/O unit count. The number of I/O units to copy. (default: all) 00:39:24.625 --skip Skip this many I/O units at start of input. (default: 0) 00:39:24.625 --seek Skip this many I/O units at start of output. (default: 0) 00:39:24.625 --aio Force usage of AIO. (by default io_uring is used if available) 00:39:24.625 --sparse Enable hole skipping in input target 00:39:24.625 Available iflag and oflag values: 00:39:24.625 append - append mode 00:39:24.625 direct - use direct I/O for data 00:39:24.625 directory - fail unless a directory 00:39:24.625 dsync - use synchronized I/O for data 00:39:24.625 noatime - do not update access time 00:39:24.625 noctty - do not assign controlling terminal from file 00:39:24.625 nofollow - do not follow symlinks 00:39:24.625 nonblock - use non-blocking I/O 00:39:24.625 sync - use synchronized I/O for data and metadata 00:39:24.625 ************************************ 00:39:24.625 END TEST dd_invalid_arguments 00:39:24.625 ************************************ 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:24.625 00:39:24.625 real 0m0.081s 00:39:24.625 user 0m0.050s 00:39:24.625 sys 0m0.029s 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:24.625 ************************************ 00:39:24.625 START TEST dd_double_input 00:39:24.625 ************************************ 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:24.625 [2024-12-09 05:30:11.518052] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
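These negative tests all follow the same pattern: spdk_dd is started with a deliberately invalid flag combination, and the NOT wrapper from autotest_common.sh turns the expected failure into a test pass. Stripped of the wrapper's bookkeeping, the dd_double_input case above is roughly equivalent to this sketch (illustrative only, not the wrapper's actual code):

    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
           --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=; then
        echo "negative test failed: spdk_dd accepted --if and --ib together" >&2
        exit 1
    fi
    # expected: spdk_dd prints "You may specify either --if or --ib, but not both."
    # and exits with a non-zero status (22 here), which the wrapper treats as the expected failure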
00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:24.625 00:39:24.625 real 0m0.067s 00:39:24.625 user 0m0.045s 00:39:24.625 sys 0m0.021s 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.625 ************************************ 00:39:24.625 END TEST dd_double_input 00:39:24.625 ************************************ 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:24.625 ************************************ 00:39:24.625 START TEST dd_double_output 00:39:24.625 ************************************ 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:24.625 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:24.885 [2024-12-09 05:30:11.643408] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:24.885 00:39:24.885 real 0m0.079s 00:39:24.885 user 0m0.049s 00:39:24.885 sys 0m0.029s 00:39:24.885 ************************************ 00:39:24.885 END TEST dd_double_output 00:39:24.885 ************************************ 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:24.885 ************************************ 00:39:24.885 START TEST dd_no_input 00:39:24.885 ************************************ 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:24.885 [2024-12-09 05:30:11.765226] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:24.885 00:39:24.885 real 0m0.075s 00:39:24.885 user 0m0.045s 00:39:24.885 sys 0m0.029s 00:39:24.885 ************************************ 00:39:24.885 END TEST dd_no_input 00:39:24.885 ************************************ 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:24.885 ************************************ 00:39:24.885 START TEST dd_no_output 00:39:24.885 ************************************ 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.885 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.886 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.886 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.886 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.886 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:24.886 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:24.886 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:25.145 [2024-12-09 05:30:11.886832] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:39:25.145 05:30:11 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:39:25.145 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:25.145 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:25.145 ************************************ 00:39:25.145 END TEST dd_no_output 00:39:25.145 ************************************ 00:39:25.145 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:25.145 00:39:25.145 real 0m0.067s 00:39:25.145 user 0m0.042s 00:39:25.145 sys 0m0.023s 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:25.146 ************************************ 00:39:25.146 START TEST dd_wrong_blocksize 00:39:25.146 ************************************ 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:25.146 05:30:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:25.146 [2024-12-09 05:30:12.008819] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:25.146 00:39:25.146 real 0m0.073s 00:39:25.146 user 0m0.044s 00:39:25.146 sys 0m0.027s 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:25.146 ************************************ 00:39:25.146 END TEST dd_wrong_blocksize 00:39:25.146 ************************************ 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:25.146 ************************************ 00:39:25.146 START TEST dd_smaller_blocksize 00:39:25.146 ************************************ 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.146 
05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:25.146 05:30:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:25.404 [2024-12-09 05:30:12.137303] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:39:25.404 [2024-12-09 05:30:12.137400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61592 ] 00:39:25.404 [2024-12-09 05:30:12.291451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.404 [2024-12-09 05:30:12.331632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.404 [2024-12-09 05:30:12.365600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:25.970 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:39:25.970 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:39:25.970 [2024-12-09 05:30:12.928885] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:39:25.970 [2024-12-09 05:30:12.929214] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:26.228 [2024-12-09 05:30:13.000609] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:26.228 00:39:26.228 real 0m1.052s 00:39:26.228 user 0m0.414s 00:39:26.228 sys 0m0.528s 00:39:26.228 ************************************ 00:39:26.228 END TEST dd_smaller_blocksize 00:39:26.228 ************************************ 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:26.228 ************************************ 00:39:26.228 START TEST dd_invalid_count 00:39:26.228 ************************************ 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
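The dd_smaller_blocksize case just completed is the one negative test that fails after SPDK has started up rather than during argument validation: --bs=99999999999999 (about 91 TiB) drives a buffer allocation DPDK cannot satisfy, so EAL reports that no suitable memseg_list exists and spdk_dd aborts with "Cannot allocate memory - try smaller block size value". A sketch of reproducing just that step, with the surrounding exit-status bookkeeping omitted:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --bs=99999999999999
    # expected: EAL memseg allocation failure, then
    # "Cannot allocate memory - try smaller block size value" and a non-zero exit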
00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:26.228 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:39:26.486 [2024-12-09 05:30:13.233532] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:26.486 00:39:26.486 real 0m0.079s 00:39:26.486 user 0m0.050s 00:39:26.486 sys 0m0.028s 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:39:26.486 ************************************ 00:39:26.486 END TEST dd_invalid_count 00:39:26.486 ************************************ 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:26.486 ************************************ 
00:39:26.486 START TEST dd_invalid_oflag 00:39:26.486 ************************************ 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:26.486 [2024-12-09 05:30:13.353603] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:39:26.486 ************************************ 00:39:26.486 END TEST dd_invalid_oflag 00:39:26.486 ************************************ 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:26.486 00:39:26.486 real 0m0.074s 00:39:26.486 user 0m0.042s 00:39:26.486 sys 0m0.031s 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:39:26.486 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:26.487 ************************************ 00:39:26.487 START TEST dd_invalid_iflag 00:39:26.487 
************************************ 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:26.487 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:26.744 [2024-12-09 05:30:13.478215] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:26.744 00:39:26.744 real 0m0.076s 00:39:26.744 user 0m0.048s 00:39:26.744 sys 0m0.028s 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:39:26.744 ************************************ 00:39:26.744 END TEST dd_invalid_iflag 00:39:26.744 ************************************ 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:26.744 ************************************ 00:39:26.744 START TEST dd_unknown_flag 00:39:26.744 ************************************ 00:39:26.744 
05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:39:26.744 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:26.745 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.745 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.745 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.745 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.745 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.745 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.745 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.745 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:26.745 05:30:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:26.745 [2024-12-09 05:30:13.611448] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:26.745 [2024-12-09 05:30:13.611602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61689 ] 00:39:27.029 [2024-12-09 05:30:13.759470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:27.029 [2024-12-09 05:30:13.793145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.029 [2024-12-09 05:30:13.822882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:27.029 [2024-12-09 05:30:13.842974] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:39:27.029 [2024-12-09 05:30:13.843038] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:27.029 [2024-12-09 05:30:13.843094] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:39:27.029 [2024-12-09 05:30:13.843108] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:27.029 [2024-12-09 05:30:13.843356] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:39:27.029 [2024-12-09 05:30:13.843381] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:27.029 [2024-12-09 05:30:13.843459] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:39:27.029 [2024-12-09 05:30:13.843479] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:39:27.029 [2024-12-09 05:30:13.910923] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:27.311 ************************************ 00:39:27.311 END TEST dd_unknown_flag 00:39:27.311 ************************************ 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:27.311 00:39:27.311 real 0m0.481s 00:39:27.311 user 0m0.280s 00:39:27.311 sys 0m0.109s 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:27.311 ************************************ 00:39:27.311 START TEST dd_invalid_json 00:39:27.311 ************************************ 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:27.311 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:27.311 [2024-12-09 05:30:14.136367] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:27.311 [2024-12-09 05:30:14.136677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61718 ] 00:39:27.568 [2024-12-09 05:30:14.286035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:27.568 [2024-12-09 05:30:14.335334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.568 [2024-12-09 05:30:14.335619] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:39:27.568 [2024-12-09 05:30:14.335660] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:27.568 [2024-12-09 05:30:14.335676] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:27.568 [2024-12-09 05:30:14.335729] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:27.568 00:39:27.568 real 0m0.379s 00:39:27.568 user 0m0.212s 00:39:27.568 sys 0m0.063s 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:39:27.568 ************************************ 00:39:27.568 END TEST dd_invalid_json 00:39:27.568 ************************************ 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:27.568 ************************************ 00:39:27.568 START TEST dd_invalid_seek 00:39:27.568 ************************************ 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:39:27.568 
05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.568 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.569 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.569 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.569 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:27.569 05:30:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:39:27.826 { 00:39:27.826 "subsystems": [ 00:39:27.826 { 00:39:27.826 "subsystem": "bdev", 00:39:27.826 "config": [ 00:39:27.826 { 00:39:27.826 "params": { 00:39:27.826 "block_size": 512, 00:39:27.826 "num_blocks": 512, 00:39:27.826 "name": "malloc0" 00:39:27.826 }, 00:39:27.826 "method": "bdev_malloc_create" 00:39:27.826 }, 00:39:27.826 { 00:39:27.826 "params": { 00:39:27.826 "block_size": 512, 00:39:27.826 "num_blocks": 512, 00:39:27.826 "name": "malloc1" 00:39:27.826 }, 00:39:27.826 "method": "bdev_malloc_create" 00:39:27.826 }, 00:39:27.826 { 00:39:27.826 "method": "bdev_wait_for_examine" 00:39:27.826 } 00:39:27.826 ] 00:39:27.826 } 00:39:27.826 ] 00:39:27.826 } 00:39:27.826 [2024-12-09 05:30:14.583058] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:27.826 [2024-12-09 05:30:14.583153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61747 ] 00:39:27.826 [2024-12-09 05:30:14.735155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:27.826 [2024-12-09 05:30:14.769855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.083 [2024-12-09 05:30:14.801182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:28.083 [2024-12-09 05:30:14.847690] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:39:28.083 [2024-12-09 05:30:14.847754] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:28.083 [2024-12-09 05:30:14.919063] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:28.083 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:39:28.083 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:28.083 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:39:28.083 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:39:28.083 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:39:28.083 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:28.083 00:39:28.083 real 0m0.530s 00:39:28.083 user 0m0.391s 00:39:28.083 sys 0m0.116s 00:39:28.083 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:28.083 ************************************ 00:39:28.083 END TEST dd_invalid_seek 00:39:28.083 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:39:28.083 ************************************ 00:39:28.341 05:30:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:39:28.341 05:30:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:28.341 05:30:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:28.341 05:30:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:28.341 ************************************ 00:39:28.341 START TEST dd_invalid_skip 00:39:28.341 ************************************ 00:39:28.341 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:39:28.341 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:28.342 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:39:28.342 { 00:39:28.342 "subsystems": [ 00:39:28.342 { 00:39:28.342 "subsystem": "bdev", 00:39:28.342 "config": [ 00:39:28.342 { 00:39:28.342 "params": { 00:39:28.342 "block_size": 512, 00:39:28.342 "num_blocks": 512, 00:39:28.342 "name": "malloc0" 00:39:28.342 }, 00:39:28.342 "method": "bdev_malloc_create" 00:39:28.342 }, 00:39:28.342 { 00:39:28.342 "params": { 00:39:28.342 "block_size": 512, 00:39:28.342 "num_blocks": 512, 00:39:28.342 "name": "malloc1" 00:39:28.342 }, 00:39:28.342 "method": "bdev_malloc_create" 00:39:28.342 }, 00:39:28.342 { 00:39:28.342 "method": "bdev_wait_for_examine" 00:39:28.342 } 00:39:28.342 ] 00:39:28.342 } 00:39:28.342 ] 00:39:28.342 } 00:39:28.342 [2024-12-09 05:30:15.139628] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:28.342 [2024-12-09 05:30:15.139720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61781 ] 00:39:28.342 [2024-12-09 05:30:15.285900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.599 [2024-12-09 05:30:15.320350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.599 [2024-12-09 05:30:15.351037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:28.599 [2024-12-09 05:30:15.399605] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:39:28.599 [2024-12-09 05:30:15.399689] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:28.599 [2024-12-09 05:30:15.479484] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:28.856 00:39:28.856 real 0m0.520s 00:39:28.856 user 0m0.360s 00:39:28.856 sys 0m0.111s 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:28.856 ************************************ 00:39:28.856 END TEST dd_invalid_skip 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:39:28.856 ************************************ 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:28.856 ************************************ 00:39:28.856 START TEST dd_invalid_input_count 00:39:28.856 ************************************ 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:28.856 05:30:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:39:28.856 [2024-12-09 05:30:15.708023] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:28.856 [2024-12-09 05:30:15.708137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61814 ] 00:39:28.856 { 00:39:28.856 "subsystems": [ 00:39:28.856 { 00:39:28.856 "subsystem": "bdev", 00:39:28.856 "config": [ 00:39:28.856 { 00:39:28.856 "params": { 00:39:28.856 "block_size": 512, 00:39:28.856 "num_blocks": 512, 00:39:28.856 "name": "malloc0" 00:39:28.856 }, 00:39:28.856 "method": "bdev_malloc_create" 00:39:28.856 }, 00:39:28.856 { 00:39:28.856 "params": { 00:39:28.856 "block_size": 512, 00:39:28.856 "num_blocks": 512, 00:39:28.856 "name": "malloc1" 00:39:28.856 }, 00:39:28.856 "method": "bdev_malloc_create" 00:39:28.856 }, 00:39:28.856 { 00:39:28.856 "method": "bdev_wait_for_examine" 00:39:28.856 } 00:39:28.856 ] 00:39:28.856 } 00:39:28.856 ] 00:39:28.856 } 00:39:29.113 [2024-12-09 05:30:15.861664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.113 [2024-12-09 05:30:15.897265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.113 [2024-12-09 05:30:15.929532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:29.113 [2024-12-09 05:30:15.977513] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:39:29.113 [2024-12-09 05:30:15.977609] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:29.113 [2024-12-09 05:30:16.049123] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:29.371 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:39:29.371 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:29.371 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:29.372 00:39:29.372 real 0m0.513s 00:39:29.372 user 0m0.358s 00:39:29.372 sys 0m0.118s 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:39:29.372 ************************************ 00:39:29.372 END TEST dd_invalid_input_count 00:39:29.372 ************************************ 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:29.372 ************************************ 00:39:29.372 START TEST dd_invalid_output_count 00:39:29.372 ************************************ 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:29.372 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:39:29.372 [2024-12-09 05:30:16.261183] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:29.372 [2024-12-09 05:30:16.261270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61848 ] 00:39:29.372 { 00:39:29.372 "subsystems": [ 00:39:29.372 { 00:39:29.372 "subsystem": "bdev", 00:39:29.372 "config": [ 00:39:29.372 { 00:39:29.372 "params": { 00:39:29.372 "block_size": 512, 00:39:29.372 "num_blocks": 512, 00:39:29.372 "name": "malloc0" 00:39:29.372 }, 00:39:29.372 "method": "bdev_malloc_create" 00:39:29.372 }, 00:39:29.372 { 00:39:29.372 "method": "bdev_wait_for_examine" 00:39:29.372 } 00:39:29.372 ] 00:39:29.372 } 00:39:29.372 ] 00:39:29.372 } 00:39:29.631 [2024-12-09 05:30:16.415473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.631 [2024-12-09 05:30:16.456545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.631 [2024-12-09 05:30:16.492359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:29.631 [2024-12-09 05:30:16.535043] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:39:29.631 [2024-12-09 05:30:16.535119] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:29.890 [2024-12-09 05:30:16.612057] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:29.890 00:39:29.890 real 0m0.531s 00:39:29.890 user 0m0.369s 00:39:29.890 sys 0m0.116s 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:39:29.890 ************************************ 00:39:29.890 END TEST dd_invalid_output_count 00:39:29.890 ************************************ 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:29.890 ************************************ 00:39:29.890 START TEST dd_bs_not_multiple 00:39:29.890 ************************************ 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:39:29.890 05:30:16 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:29.890 05:30:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:39:30.148 [2024-12-09 05:30:16.881892] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:30.148 [2024-12-09 05:30:16.882050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61879 ] 00:39:30.148 { 00:39:30.148 "subsystems": [ 00:39:30.148 { 00:39:30.148 "subsystem": "bdev", 00:39:30.148 "config": [ 00:39:30.148 { 00:39:30.148 "params": { 00:39:30.148 "block_size": 512, 00:39:30.148 "num_blocks": 512, 00:39:30.148 "name": "malloc0" 00:39:30.148 }, 00:39:30.148 "method": "bdev_malloc_create" 00:39:30.148 }, 00:39:30.148 { 00:39:30.148 "params": { 00:39:30.148 "block_size": 512, 00:39:30.148 "num_blocks": 512, 00:39:30.148 "name": "malloc1" 00:39:30.148 }, 00:39:30.148 "method": "bdev_malloc_create" 00:39:30.148 }, 00:39:30.148 { 00:39:30.148 "method": "bdev_wait_for_examine" 00:39:30.148 } 00:39:30.148 ] 00:39:30.148 } 00:39:30.148 ] 00:39:30.148 } 00:39:30.148 [2024-12-09 05:30:17.040689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:30.148 [2024-12-09 05:30:17.082888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:30.148 [2024-12-09 05:30:17.117073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:30.407 [2024-12-09 05:30:17.167102] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:39:30.407 [2024-12-09 05:30:17.167175] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:30.407 [2024-12-09 05:30:17.241433] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:30.407 05:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:39:30.407 05:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:30.407 05:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:39:30.407 05:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:39:30.407 05:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:39:30.407 05:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:30.407 00:39:30.407 real 0m0.553s 00:39:30.407 user 0m0.410s 00:39:30.407 sys 0m0.126s 00:39:30.407 05:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:30.407 05:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:39:30.407 ************************************ 00:39:30.407 END TEST dd_bs_not_multiple 00:39:30.407 ************************************ 00:39:30.665 00:39:30.665 real 0m6.250s 00:39:30.665 user 0m3.567s 00:39:30.665 sys 0m2.140s 00:39:30.665 05:30:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:30.665 05:30:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:30.665 ************************************ 00:39:30.665 END TEST spdk_dd_negative 00:39:30.665 ************************************ 00:39:30.665 00:39:30.665 real 1m12.031s 00:39:30.665 user 0m48.269s 00:39:30.665 sys 0m28.314s 00:39:30.665 05:30:17 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:30.665 05:30:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:30.665 
************************************ 00:39:30.665 END TEST spdk_dd 00:39:30.665 ************************************ 00:39:30.665 05:30:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:39:30.665 05:30:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:39:30.665 05:30:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:39:30.665 05:30:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:30.665 05:30:17 -- common/autotest_common.sh@10 -- # set +x 00:39:30.665 05:30:17 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:39:30.665 05:30:17 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:39:30.665 05:30:17 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:39:30.665 05:30:17 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:39:30.665 05:30:17 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:39:30.665 05:30:17 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:39:30.665 05:30:17 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:39:30.665 05:30:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:30.665 05:30:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:30.665 05:30:17 -- common/autotest_common.sh@10 -- # set +x 00:39:30.665 ************************************ 00:39:30.665 START TEST nvmf_tcp 00:39:30.665 ************************************ 00:39:30.665 05:30:17 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:39:30.665 * Looking for test storage... 00:39:30.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:39:30.665 05:30:17 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:30.665 05:30:17 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:39:30.665 05:30:17 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:30.924 05:30:17 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:30.924 05:30:17 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:30.924 05:30:17 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:30.924 05:30:17 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:30.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.924 --rc genhtml_branch_coverage=1 00:39:30.924 --rc genhtml_function_coverage=1 00:39:30.924 --rc genhtml_legend=1 00:39:30.924 --rc geninfo_all_blocks=1 00:39:30.924 --rc geninfo_unexecuted_blocks=1 00:39:30.924 00:39:30.924 ' 00:39:30.924 05:30:17 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:30.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.924 --rc genhtml_branch_coverage=1 00:39:30.924 --rc genhtml_function_coverage=1 00:39:30.924 --rc genhtml_legend=1 00:39:30.924 --rc geninfo_all_blocks=1 00:39:30.924 --rc geninfo_unexecuted_blocks=1 00:39:30.924 00:39:30.924 ' 00:39:30.924 05:30:17 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:30.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.924 --rc genhtml_branch_coverage=1 00:39:30.924 --rc genhtml_function_coverage=1 00:39:30.924 --rc genhtml_legend=1 00:39:30.924 --rc geninfo_all_blocks=1 00:39:30.924 --rc geninfo_unexecuted_blocks=1 00:39:30.924 00:39:30.924 ' 00:39:30.924 05:30:17 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:30.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.925 --rc genhtml_branch_coverage=1 00:39:30.925 --rc genhtml_function_coverage=1 00:39:30.925 --rc genhtml_legend=1 00:39:30.925 --rc geninfo_all_blocks=1 00:39:30.925 --rc geninfo_unexecuted_blocks=1 00:39:30.925 00:39:30.925 ' 00:39:30.925 05:30:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:39:30.925 05:30:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:39:30.925 05:30:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:39:30.925 05:30:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:30.925 05:30:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:30.925 05:30:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:30.925 ************************************ 00:39:30.925 START TEST nvmf_target_core 00:39:30.925 ************************************ 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:39:30.925 * Looking for test storage... 00:39:30.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:30.925 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:31.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.185 --rc genhtml_branch_coverage=1 00:39:31.185 --rc genhtml_function_coverage=1 00:39:31.185 --rc genhtml_legend=1 00:39:31.185 --rc geninfo_all_blocks=1 00:39:31.185 --rc geninfo_unexecuted_blocks=1 00:39:31.185 00:39:31.185 ' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:31.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.185 --rc genhtml_branch_coverage=1 00:39:31.185 --rc genhtml_function_coverage=1 00:39:31.185 --rc genhtml_legend=1 00:39:31.185 --rc geninfo_all_blocks=1 00:39:31.185 --rc geninfo_unexecuted_blocks=1 00:39:31.185 00:39:31.185 ' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:31.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.185 --rc genhtml_branch_coverage=1 00:39:31.185 --rc genhtml_function_coverage=1 00:39:31.185 --rc genhtml_legend=1 00:39:31.185 --rc geninfo_all_blocks=1 00:39:31.185 --rc geninfo_unexecuted_blocks=1 00:39:31.185 00:39:31.185 ' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:31.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.185 --rc genhtml_branch_coverage=1 00:39:31.185 --rc genhtml_function_coverage=1 00:39:31.185 --rc genhtml_legend=1 00:39:31.185 --rc geninfo_all_blocks=1 00:39:31.185 --rc geninfo_unexecuted_blocks=1 00:39:31.185 00:39:31.185 ' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
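The lt 1.15 2 check traced above (scripts/common.sh: lt, cmp_versions, decimal) gates the lcov coverage options on the installed lcov version by splitting both version strings on '.', '-' and ':' and comparing the components numerically. The snippet below is a condensed sketch of that logic under its own name, cmp_lt, and assumes purely numeric components; the real decimal helper in the trace also normalises non-numeric fields.

  # Condensed sketch of the version comparison traced above (scripts/common.sh).
  cmp_lt() {                       # usage: cmp_lt 1.15 2  -> exit 0 if $1 < $2
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          # Missing components count as 0, e.g. "2" is treated as "2.0".
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                     # equal versions are not "less than"
  }

  cmp_lt 1.15 2 && echo 'lcov 1.15 is older than 2: keep the legacy --rc lcov_* options'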
00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:31.185 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:39:31.185 ************************************ 00:39:31.185 START TEST nvmf_host_management 00:39:31.185 ************************************ 00:39:31.185 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:39:31.185 * Looking for test storage... 
00:39:31.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:31.185 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:31.185 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:39:31.185 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:31.185 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:31.185 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:31.185 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:31.185 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:31.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.186 --rc genhtml_branch_coverage=1 00:39:31.186 --rc genhtml_function_coverage=1 00:39:31.186 --rc genhtml_legend=1 00:39:31.186 --rc geninfo_all_blocks=1 00:39:31.186 --rc geninfo_unexecuted_blocks=1 00:39:31.186 00:39:31.186 ' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:31.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.186 --rc genhtml_branch_coverage=1 00:39:31.186 --rc genhtml_function_coverage=1 00:39:31.186 --rc genhtml_legend=1 00:39:31.186 --rc geninfo_all_blocks=1 00:39:31.186 --rc geninfo_unexecuted_blocks=1 00:39:31.186 00:39:31.186 ' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:31.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.186 --rc genhtml_branch_coverage=1 00:39:31.186 --rc genhtml_function_coverage=1 00:39:31.186 --rc genhtml_legend=1 00:39:31.186 --rc geninfo_all_blocks=1 00:39:31.186 --rc geninfo_unexecuted_blocks=1 00:39:31.186 00:39:31.186 ' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:31.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.186 --rc genhtml_branch_coverage=1 00:39:31.186 --rc genhtml_function_coverage=1 00:39:31.186 --rc genhtml_legend=1 00:39:31.186 --rc geninfo_all_blocks=1 00:39:31.186 --rc geninfo_unexecuted_blocks=1 00:39:31.186 00:39:31.186 ' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:31.186 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:31.186 05:30:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:31.186 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:31.187 Cannot find device "nvmf_init_br" 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:39:31.187 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:31.446 Cannot find device "nvmf_init_br2" 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:31.446 Cannot find device "nvmf_tgt_br" 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:31.446 Cannot find device "nvmf_tgt_br2" 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:31.446 Cannot find device "nvmf_init_br" 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:31.446 Cannot find device "nvmf_init_br2" 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:31.446 Cannot find device "nvmf_tgt_br" 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:31.446 Cannot find device "nvmf_tgt_br2" 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:31.446 Cannot find device "nvmf_br" 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:31.446 Cannot find device "nvmf_init_if" 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:31.446 Cannot find device "nvmf_init_if2" 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:31.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:31.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:31.446 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:39:31.705 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:31.705 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:31.705 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:31.705 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:31.705 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:31.705 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:31.706 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:31.706 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:39:31.706 00:39:31.706 --- 10.0.0.3 ping statistics --- 00:39:31.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:31.706 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:31.706 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:31.706 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:39:31.706 00:39:31.706 --- 10.0.0.4 ping statistics --- 00:39:31.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:31.706 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:31.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:31.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:39:31.706 00:39:31.706 --- 10.0.0.1 ping statistics --- 00:39:31.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:31.706 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:31.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:31.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:39:31.706 00:39:31.706 --- 10.0.0.2 ping statistics --- 00:39:31.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:31.706 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62222 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62222 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62222 ']' 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:31.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:31.706 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:31.965 [2024-12-09 05:30:18.685612] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
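Condensed from the nvmf_veth_init trace above: the test builds a private topology in which the initiator-side interfaces (10.0.0.1 and 10.0.0.2) stay in the default namespace, the target-side interfaces (10.0.0.3 and 10.0.0.4) live in the nvmf_tgt_ns_spdk namespace, and all four veth peers are bridged through nvmf_br, with iptables rules admitting TCP port 4420. The sketch below reproduces only one of the two initiator/target pairs; it is a simplified re-creation of the traced commands, not the common.sh helper itself.

  # Minimal re-creation of the veth/bridge topology traced above (one pair only, run as root).
  # Names, addresses and the port are copied from the log.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                                  # bridge the two namespaces together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic
  ping -c 1 10.0.0.3                                               # initiator -> target reachability check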
00:39:31.965 [2024-12-09 05:30:18.685706] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:31.965 [2024-12-09 05:30:18.837359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:31.965 [2024-12-09 05:30:18.882683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:31.965 [2024-12-09 05:30:18.882756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:31.965 [2024-12-09 05:30:18.882770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:31.965 [2024-12-09 05:30:18.882781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:31.965 [2024-12-09 05:30:18.882791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:31.965 [2024-12-09 05:30:18.883728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:31.965 [2024-12-09 05:30:18.883862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:31.965 [2024-12-09 05:30:18.883999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:31.965 [2024-12-09 05:30:18.884005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:31.965 [2024-12-09 05:30:18.919202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:32.223 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:32.223 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:32.223 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:32.223 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:32.223 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:32.223 [2024-12-09 05:30:19.015994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:32.223 Malloc0 00:39:32.223 [2024-12-09 05:30:19.087652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:32.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:32.223 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62269 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62269 /var/tmp/bdevperf.sock 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62269 ']' 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
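The rpcs.txt batch assembled by the cat above is not echoed into the log, so its exact contents are an assumption; based on the Malloc0 bdev, the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 variables, the SPDKISFASTANDAWESOME serial, the cnode0/host0 NQNs used later, and the listener notice for 10.0.0.3:4420, it plausibly corresponds to a sequence of standard SPDK RPCs along these lines:

  # Hypothetical reconstruction of the subsystem setup batched through rpc_cmd above.
  # The RPC names are standard SPDK RPCs; the exact flags used by host_management.sh
  # are not visible in this log.
  rpc_cmd bdev_malloc_create -b Malloc0 64 512                       # 64 MB bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0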
00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:32.224 { 00:39:32.224 "params": { 00:39:32.224 "name": "Nvme$subsystem", 00:39:32.224 "trtype": "$TEST_TRANSPORT", 00:39:32.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:32.224 "adrfam": "ipv4", 00:39:32.224 "trsvcid": "$NVMF_PORT", 00:39:32.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:32.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:32.224 "hdgst": ${hdgst:-false}, 00:39:32.224 "ddgst": ${ddgst:-false} 00:39:32.224 }, 00:39:32.224 "method": "bdev_nvme_attach_controller" 00:39:32.224 } 00:39:32.224 EOF 00:39:32.224 )") 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:32.224 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:32.224 "params": { 00:39:32.224 "name": "Nvme0", 00:39:32.224 "trtype": "tcp", 00:39:32.224 "traddr": "10.0.0.3", 00:39:32.224 "adrfam": "ipv4", 00:39:32.224 "trsvcid": "4420", 00:39:32.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:32.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:32.224 "hdgst": false, 00:39:32.224 "ddgst": false 00:39:32.224 }, 00:39:32.224 "method": "bdev_nvme_attach_controller" 00:39:32.224 }' 00:39:32.224 [2024-12-09 05:30:19.181613] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:39:32.224 [2024-12-09 05:30:19.181702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62269 ] 00:39:32.482 [2024-12-09 05:30:19.327135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:32.482 [2024-12-09 05:30:19.365488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.482 [2024-12-09 05:30:19.406864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:32.741 Running I/O for 10 seconds... 
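The bdevperf initiator above receives its controller definition through a process-substitution descriptor (--json /dev/fd/63); the entry it contains is the bdev_nvme_attach_controller object printed by gen_nvmf_target_json just before the launch. An equivalent standalone run could put the same document in a regular file, assuming the usual subsystems/bdev wrapper around the printed entry (the wrapper itself is not echoed in the trace):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": false, "ddgst": false }
          }
        ]
      }
    ]
  }

and then invoke bdevperf with the same options seen in the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json nvme0.json -q 64 -o 65536 -w verify -t 10   # queue depth 64, 64 KiB I/O, verify workload, 10 s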
00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:39:32.741 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.000 05:30:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.000 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:33.259 [2024-12-09 05:30:19.973370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.973952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.973971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.974023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.974059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.974092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.974129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.974193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.974232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.974269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.974301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.974334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.259 [2024-12-09 05:30:19.974365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:39:33.259 [2024-12-09 05:30:19.974380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.974976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.974993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:33.260 [2024-12-09 05:30:19.975103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 
[2024-12-09 05:30:19.975452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:33.260 [2024-12-09 05:30:19.975774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.260 [2024-12-09 05:30:19.975790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd82d0 is same with the state(6) to be set 00:39:33.261 [2024-12-09 05:30:19.975997] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:33.261 [2024-12-09 05:30:19.976041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.261 [2024-12-09 05:30:19.976060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:33.261 [2024-12-09 05:30:19.976075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.261 [2024-12-09 05:30:19.976108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:33.261 [2024-12-09 05:30:19.976127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.261 [2024-12-09 05:30:19.976142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:33.261 [2024-12-09 05:30:19.976157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:33.261 [2024-12-09 05:30:19.976171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcddce0 is same with the state(6) to be set 00:39:33.261 [2024-12-09 05:30:19.977657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:33.261 task offset: 81920 on job bdev=Nvme0n1 fails 00:39:33.261 00:39:33.261 Latency(us) 00:39:33.261 [2024-12-09T05:30:20.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:33.261 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:33.261 Job: Nvme0n1 ended in about 0.47 seconds with error 00:39:33.261 Verification LBA range: start 0x0 length 0x400 00:39:33.261 Nvme0n1 : 0.47 1375.69 85.98 137.57 0.00 40923.85 3455.53 40036.54 00:39:33.261 [2024-12-09T05:30:20.233Z] =================================================================================================================== 00:39:33.261 [2024-12-09T05:30:20.233Z] Total : 1375.69 85.98 137.57 0.00 40923.85 3455.53 40036.54 00:39:33.261 [2024-12-09 05:30:19.980216] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:33.261 [2024-12-09 05:30:19.980275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcddce0 (9): Bad file descriptor 00:39:33.261 [2024-12-09 05:30:19.990654] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
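The trace above shows the test polling bdevperf for completed reads before it removes and re-adds host0 to force the controller reset (hence the flood of ABORTED - SQ DELETION completions). An approximate reconstruction of that waitforio polling loop, pieced together from the xtrace lines around host_management.sh@52-64 rather than copied from the script, with rpc_cmd taken to be the RPC wrapper seen throughout the trace:

# Poll the bdevperf RPC socket until the bdev has served at least 100 reads,
# retrying up to 10 times with a 0.25 s pause between attempts.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i read_io_count
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
# invoked in the trace as: waitforio /var/tmp/bdevperf.sock Nvme0n1 (host_management.sh@80)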
00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62269 00:39:34.195 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62269) - No such process 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:34.195 { 00:39:34.195 "params": { 00:39:34.195 "name": "Nvme$subsystem", 00:39:34.195 "trtype": "$TEST_TRANSPORT", 00:39:34.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:34.195 "adrfam": "ipv4", 00:39:34.195 "trsvcid": "$NVMF_PORT", 00:39:34.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:34.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:34.195 "hdgst": ${hdgst:-false}, 00:39:34.195 "ddgst": ${ddgst:-false} 00:39:34.195 }, 00:39:34.195 "method": "bdev_nvme_attach_controller" 00:39:34.195 } 00:39:34.195 EOF 00:39:34.195 )") 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:34.195 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:34.195 "params": { 00:39:34.195 "name": "Nvme0", 00:39:34.195 "trtype": "tcp", 00:39:34.195 "traddr": "10.0.0.3", 00:39:34.195 "adrfam": "ipv4", 00:39:34.195 "trsvcid": "4420", 00:39:34.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:34.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:34.195 "hdgst": false, 00:39:34.195 "ddgst": false 00:39:34.195 }, 00:39:34.195 "method": "bdev_nvme_attach_controller" 00:39:34.195 }' 00:39:34.195 [2024-12-09 05:30:21.027146] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
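The restarted bdevperf above takes its entire target description from a JSON document produced on the fly by gen_nvmf_target_json and handed over as /dev/fd/62. A condensed sketch of that invocation; the process substitution producing the /dev/fd path is an inference from the trace, not something the log states explicitly:

# Generate the bdev_nvme_attach_controller config for subsystem 0 and feed it
# to bdevperf through a file descriptor instead of a temporary config file.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 1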
00:39:34.195 [2024-12-09 05:30:21.027255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62303 ] 00:39:34.453 [2024-12-09 05:30:21.185391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.453 [2024-12-09 05:30:21.225171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.453 [2024-12-09 05:30:21.267114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:34.453 Running I/O for 1 seconds... 00:39:35.857 1408.00 IOPS, 88.00 MiB/s 00:39:35.857 Latency(us) 00:39:35.857 [2024-12-09T05:30:22.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:35.857 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:35.857 Verification LBA range: start 0x0 length 0x400 00:39:35.857 Nvme0n1 : 1.02 1438.70 89.92 0.00 0.00 43494.07 4259.84 43372.92 00:39:35.857 [2024-12-09T05:30:22.829Z] =================================================================================================================== 00:39:35.857 [2024-12-09T05:30:22.829Z] Total : 1438.70 89.92 0.00 0.00 43494.07 4259.84 43372.92 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:35.857 rmmod nvme_tcp 00:39:35.857 rmmod nvme_fabrics 00:39:35.857 rmmod nvme_keyring 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62222 ']' 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62222 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62222 ']' 00:39:35.857 05:30:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62222 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62222 00:39:35.857 killing process with pid 62222 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62222' 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62222 00:39:35.857 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62222 00:39:36.114 [2024-12-09 05:30:22.932998] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:36.114 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:36.114 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:36.114 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:36.114 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:36.114 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:36.114 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:36.114 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:36.114 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:36.372 05:30:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:36.372 00:39:36.372 real 0m5.266s 00:39:36.372 user 0m18.453s 00:39:36.372 sys 0m1.448s 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:36.372 ************************************ 00:39:36.372 END TEST nvmf_host_management 00:39:36.372 ************************************ 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:39:36.372 ************************************ 00:39:36.372 START TEST nvmf_lvol 00:39:36.372 ************************************ 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:39:36.372 * Looking for test storage... 
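The host_management teardown (nvmftestfini) traced above unwinds the test environment before nvmf_lvol starts. An approximate outline of those steps as they appear in the log; the final namespace removal is assumed from the remove_spdk_ns helper name, since its own trace is suppressed:

# Unload the initiator-side kernel modules (which also drags out nvme_keyring,
# per the rmmod lines above), stop the nvmf target app, strip SPDK's iptables
# rules, and dismantle the veth/bridge topology.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 62222                                   # killprocess: the nvmf target pid in this run
wait 62222
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip link delete nvmf_br type bridge           # after the nomaster/down steps above
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk             # assumed: what remove_spdk_ns amounts to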
00:39:36.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:39:36.372 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.631 --rc genhtml_branch_coverage=1 00:39:36.631 --rc genhtml_function_coverage=1 00:39:36.631 --rc genhtml_legend=1 00:39:36.631 --rc geninfo_all_blocks=1 00:39:36.631 --rc geninfo_unexecuted_blocks=1 00:39:36.631 00:39:36.631 ' 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.631 --rc genhtml_branch_coverage=1 00:39:36.631 --rc genhtml_function_coverage=1 00:39:36.631 --rc genhtml_legend=1 00:39:36.631 --rc geninfo_all_blocks=1 00:39:36.631 --rc geninfo_unexecuted_blocks=1 00:39:36.631 00:39:36.631 ' 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.631 --rc genhtml_branch_coverage=1 00:39:36.631 --rc genhtml_function_coverage=1 00:39:36.631 --rc genhtml_legend=1 00:39:36.631 --rc geninfo_all_blocks=1 00:39:36.631 --rc geninfo_unexecuted_blocks=1 00:39:36.631 00:39:36.631 ' 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.631 --rc genhtml_branch_coverage=1 00:39:36.631 --rc genhtml_function_coverage=1 00:39:36.631 --rc genhtml_legend=1 00:39:36.631 --rc geninfo_all_blocks=1 00:39:36.631 --rc geninfo_unexecuted_blocks=1 00:39:36.631 00:39:36.631 ' 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:36.631 05:30:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:36.631 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:36.632 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:36.632 
05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
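The nvmf_veth_init variables above name the pieces of the dual initiator/target test network that the following trace builds (the "Cannot find device" lines are the cleanup of interfaces left over from a previous run, none of which exist here). Condensed, and using exactly the names and addresses from the trace, the resulting topology is roughly:

# Two initiator veth pairs stay in the root namespace (10.0.0.1/.2), two target
# pairs have their device ends moved into nvmf_tgt_ns_spdk (10.0.0.3/.4), and
# all four bridge-side ends are enslaved to nvmf_br.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
# each interface and the bridge are also brought up with `ip link set ... up`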
00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:36.632 Cannot find device "nvmf_init_br" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:36.632 Cannot find device "nvmf_init_br2" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:36.632 Cannot find device "nvmf_tgt_br" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:36.632 Cannot find device "nvmf_tgt_br2" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:36.632 Cannot find device "nvmf_init_br" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:36.632 Cannot find device "nvmf_init_br2" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:36.632 Cannot find device "nvmf_tgt_br" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:36.632 Cannot find device "nvmf_tgt_br2" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:36.632 Cannot find device "nvmf_br" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:36.632 Cannot find device "nvmf_init_if" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:36.632 Cannot find device "nvmf_init_if2" 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:36.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:36.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:36.632 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:36.891 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:36.891 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:39:36.891 00:39:36.891 --- 10.0.0.3 ping statistics --- 00:39:36.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:36.891 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:36.891 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:36.891 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:39:36.891 00:39:36.891 --- 10.0.0.4 ping statistics --- 00:39:36.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:36.891 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:36.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:36.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:39:36.891 00:39:36.891 --- 10.0.0.1 ping statistics --- 00:39:36.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:36.891 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:36.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:36.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:39:36.891 00:39:36.891 --- 10.0.0.2 ping statistics --- 00:39:36.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:36.891 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:36.891 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62571 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62571 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62571 ']' 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.149 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:37.149 [2024-12-09 05:30:23.962782] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:37.149 [2024-12-09 05:30:23.962916] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.407 [2024-12-09 05:30:24.246789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:37.407 [2024-12-09 05:30:24.289589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:37.407 [2024-12-09 05:30:24.289654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:37.407 [2024-12-09 05:30:24.289666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:37.407 [2024-12-09 05:30:24.289675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:37.407 [2024-12-09 05:30:24.289682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:37.407 [2024-12-09 05:30:24.290549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.407 [2024-12-09 05:30:24.290481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.407 [2024-12-09 05:30:24.290548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:37.407 [2024-12-09 05:30:24.320447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:37.972 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:37.972 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:39:37.972 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:37.972 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:37.972 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:38.230 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.231 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:38.489 [2024-12-09 05:30:25.222286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.489 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:38.747 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:38.747 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:39.004 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:39.004 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:39.261 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:39.519 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=baec6176-69c2-4a35-802c-14fab3f893af 00:39:39.519 05:30:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u baec6176-69c2-4a35-802c-14fab3f893af lvol 20 00:39:39.776 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9b49cfa6-befc-462f-979f-36343db81624 00:39:39.776 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:40.342 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b49cfa6-befc-462f-979f-36343db81624 00:39:40.600 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:39:40.859 [2024-12-09 05:30:27.704464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:40.859 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:39:41.118 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62648 00:39:41.118 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:41.118 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:42.497 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 9b49cfa6-befc-462f-979f-36343db81624 MY_SNAPSHOT 00:39:42.498 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2bcdc63d-1d9e-4355-9f3c-74c519cc5c7c 00:39:42.498 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 9b49cfa6-befc-462f-979f-36343db81624 30 00:39:42.756 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2bcdc63d-1d9e-4355-9f3c-74c519cc5c7c MY_CLONE 00:39:43.015 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=26eee884-85fc-4abd-937d-14e81855bd9d 00:39:43.015 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 26eee884-85fc-4abd-937d-14e81855bd9d 00:39:43.583 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62648 00:39:51.757 Initializing NVMe Controllers 00:39:51.757 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:39:51.757 Controller IO queue size 128, less than required. 00:39:51.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:51.757 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:51.757 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:51.757 Initialization complete. Launching workers. 
00:39:51.757 ======================================================== 00:39:51.757 Latency(us) 00:39:51.757 Device Information : IOPS MiB/s Average min max 00:39:51.757 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10127.90 39.56 12637.36 3063.08 82578.18 00:39:51.757 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10052.70 39.27 12733.95 3173.22 52524.18 00:39:51.757 ======================================================== 00:39:51.757 Total : 20180.60 78.83 12685.47 3063.08 82578.18 00:39:51.757 00:39:51.757 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:51.757 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9b49cfa6-befc-462f-979f-36343db81624 00:39:52.322 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u baec6176-69c2-4a35-802c-14fab3f893af 00:39:52.580 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:52.580 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:52.580 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:39:52.580 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:52.580 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:52.580 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:52.580 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:52.580 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:52.580 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:52.580 rmmod nvme_tcp 00:39:52.580 rmmod nvme_fabrics 00:39:52.580 rmmod nvme_keyring 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62571 ']' 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62571 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62571 ']' 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62571 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62571 00:39:52.581 killing process with pid 62571 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62571' 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62571 00:39:52.581 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62571 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:52.839 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:39:53.098 00:39:53.098 real 0m16.647s 00:39:53.098 user 1m7.908s 00:39:53.098 sys 0m4.152s 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:53.098 ************************************ 00:39:53.098 END TEST nvmf_lvol 00:39:53.098 ************************************ 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:39:53.098 ************************************ 00:39:53.098 START TEST nvmf_lvs_grow 00:39:53.098 ************************************ 00:39:53.098 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:39:53.098 * Looking for test storage... 00:39:53.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:53.098 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:53.098 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:39:53.099 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:53.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.358 --rc genhtml_branch_coverage=1 00:39:53.358 --rc genhtml_function_coverage=1 00:39:53.358 --rc genhtml_legend=1 00:39:53.358 --rc geninfo_all_blocks=1 00:39:53.358 --rc geninfo_unexecuted_blocks=1 00:39:53.358 00:39:53.358 ' 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:53.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.358 --rc genhtml_branch_coverage=1 00:39:53.358 --rc genhtml_function_coverage=1 00:39:53.358 --rc genhtml_legend=1 00:39:53.358 --rc geninfo_all_blocks=1 00:39:53.358 --rc geninfo_unexecuted_blocks=1 00:39:53.358 00:39:53.358 ' 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:53.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.358 --rc genhtml_branch_coverage=1 00:39:53.358 --rc genhtml_function_coverage=1 00:39:53.358 --rc genhtml_legend=1 00:39:53.358 --rc geninfo_all_blocks=1 00:39:53.358 --rc geninfo_unexecuted_blocks=1 00:39:53.358 00:39:53.358 ' 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:53.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.358 --rc genhtml_branch_coverage=1 00:39:53.358 --rc genhtml_function_coverage=1 00:39:53.358 --rc genhtml_legend=1 00:39:53.358 --rc geninfo_all_blocks=1 00:39:53.358 --rc geninfo_unexecuted_blocks=1 00:39:53.358 00:39:53.358 ' 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:53.358 05:30:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.358 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:53.359 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
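The two variables above are why this test talks to two different RPC servers: plain rpc.py calls (no -s flag) land on the nvmf target's default /var/tmp/spdk.sock, while the bdevperf initiator started later in the run with -r /var/tmp/bdevperf.sock is driven by passing -s before the method name. A condensed, non-authoritative sketch of that split, using only paths and RPC methods that appear in this log:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock            # must match the -r argument bdevperf is launched with
# Target-side RPC: no -s, so rpc.py uses the default /var/tmp/spdk.sock.
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
# Initiator-side RPCs: the global -s selects bdevperf's socket; the later -s 4420 is the NVMe/TCP service port, not a socket.
"$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
"$rpc_py" -s "$bdevperf_rpc_sock" bdev_get_bdevs -b Nvme0n1 -t 3000

Both sockets are path-bound UNIX domain sockets on a shared filesystem, which is why none of the rpc.py calls in this log need ip netns exec even though nvmf_tgt itself runs inside nvmf_tgt_ns_spdk.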
00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:53.359 Cannot find device "nvmf_init_br" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:53.359 Cannot find device "nvmf_init_br2" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:53.359 Cannot find device "nvmf_tgt_br" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:53.359 Cannot find device "nvmf_tgt_br2" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:53.359 Cannot find device "nvmf_init_br" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:53.359 Cannot find device "nvmf_init_br2" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:53.359 Cannot find device "nvmf_tgt_br" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:53.359 Cannot find device "nvmf_tgt_br2" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:53.359 Cannot find device "nvmf_br" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:53.359 Cannot find device "nvmf_init_if" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:53.359 Cannot find device "nvmf_init_if2" 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:53.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:53.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:53.359 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
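The nvmf_veth_init commands above (mirroring the earlier nvmf_lvol run) build one small bridged topology: the initiator ends nvmf_init_if/nvmf_init_if2 keep 10.0.0.1 and 10.0.0.2 in the root namespace, the target ends nvmf_tgt_if/nvmf_tgt_if2 carry 10.0.0.3 and 10.0.0.4 inside nvmf_tgt_ns_spdk, and the *_br peer ends are enslaved to the nvmf_br bridge. A condensed sketch of the first interface pair, reordered for readability but using the same names and addresses as the log (the second pair is analogous):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator end stays in the root netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br           # target end will move into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                           # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # address the target listens on
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                            # the bridge ties the two *_br peer ends together
ip link set nvmf_tgt_br  master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF"   # simplified tag; the real ipts helper embeds the full rule text after SPDK_NVMF:

The comment tag is what makes teardown cheap: at cleanup the iptr helper pipes iptables-save through grep -v SPDK_NVMF into iptables-restore, dropping every rule the test added.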
00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:53.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:53.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:39:53.617 00:39:53.617 --- 10.0.0.3 ping statistics --- 00:39:53.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.617 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:53.617 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:53.617 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:39:53.617 00:39:53.617 --- 10.0.0.4 ping statistics --- 00:39:53.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.617 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:53.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:53.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:39:53.617 00:39:53.617 --- 10.0.0.1 ping statistics --- 00:39:53.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.617 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:53.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:53.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:39:53.617 00:39:53.617 --- 10.0.0.2 ping statistics --- 00:39:53.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.617 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63032 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63032 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63032 ']' 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:53.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:53.617 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:53.618 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:53.618 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:53.875 [2024-12-09 05:30:40.640113] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:53.875 [2024-12-09 05:30:40.640256] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:53.875 [2024-12-09 05:30:40.804467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.875 [2024-12-09 05:30:40.845530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:53.875 [2024-12-09 05:30:40.845607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:54.133 [2024-12-09 05:30:40.845622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:54.134 [2024-12-09 05:30:40.845635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:54.134 [2024-12-09 05:30:40.845648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:54.134 [2024-12-09 05:30:40.846051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.134 [2024-12-09 05:30:40.879910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:54.134 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:54.134 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:39:54.134 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:54.134 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:54.134 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:54.134 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:54.134 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:54.392 [2024-12-09 05:30:41.307859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:54.392 ************************************ 00:39:54.392 START TEST lvs_grow_clean 00:39:54.392 ************************************ 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:54.392 05:30:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:54.392 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:54.958 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:54.958 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:55.216 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:39:55.216 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:39:55.216 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:55.474 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:55.474 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:55.474 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f lvol 150 00:39:55.731 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7471f99a-0307-4e7d-9049-b872c6d77d88 00:39:55.731 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:55.731 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:55.988 [2024-12-09 05:30:42.956545] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:55.988 [2024-12-09 05:30:42.956688] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:56.246 true 00:39:56.246 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:39:56.246 05:30:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:56.505 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:56.505 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:56.773 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7471f99a-0307-4e7d-9049-b872c6d77d88 00:39:57.062 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:39:57.319 [2024-12-09 05:30:44.057225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:57.319 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:39:57.576 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63112 00:39:57.576 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:57.576 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:57.576 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63112 /var/tmp/bdevperf.sock 00:39:57.576 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63112 ']' 00:39:57.576 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:57.576 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:57.577 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:57.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:57.577 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:57.577 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:57.577 [2024-12-09 05:30:44.451150] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
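For readers skimming the trace, the lvs_grow_clean setup above reduces to the RPC sequence below. This is only a condensed sketch of commands already shown in the trace; $SPDK_DIR, $lvs and $lvol are shorthand for /home/vagrant/spdk_repo/spdk and the UUIDs printed above, not variables the test script actually exports.

truncate -s 200M "$SPDK_DIR/test/nvmf/target/aio_bdev"
"$SPDK_DIR/scripts/rpc.py" bdev_aio_create "$SPDK_DIR/test/nvmf/target/aio_bdev" aio_bdev 4096
"$SPDK_DIR/scripts/rpc.py" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
"$SPDK_DIR/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49, i.e. 200 MiB at 4 MiB per cluster less the space held back for lvstore metadata
"$SPDK_DIR/scripts/rpc.py" bdev_lvol_create -u "$lvs" lvol 150
truncate -s 400M "$SPDK_DIR/test/nvmf/target/aio_bdev"
"$SPDK_DIR/scripts/rpc.py" bdev_aio_rescan aio_bdev                                              # backing file doubled; the lvstore is not grown yet, so it still reports 49
"$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The bdevperf process launched above with -z then attaches to that subsystem over NVMe/TCP and drives the 10-second 4 KiB random-write workload while the grow path is exercised.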
00:39:57.577 [2024-12-09 05:30:44.451237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63112 ] 00:39:57.834 [2024-12-09 05:30:44.605760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:57.834 [2024-12-09 05:30:44.644848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:57.834 [2024-12-09 05:30:44.678695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:57.834 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:57.834 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:39:57.834 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:58.091 Nvme0n1 00:39:58.091 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:58.346 [ 00:39:58.347 { 00:39:58.347 "name": "Nvme0n1", 00:39:58.347 "aliases": [ 00:39:58.347 "7471f99a-0307-4e7d-9049-b872c6d77d88" 00:39:58.347 ], 00:39:58.347 "product_name": "NVMe disk", 00:39:58.347 "block_size": 4096, 00:39:58.347 "num_blocks": 38912, 00:39:58.347 "uuid": "7471f99a-0307-4e7d-9049-b872c6d77d88", 00:39:58.347 "numa_id": -1, 00:39:58.347 "assigned_rate_limits": { 00:39:58.347 "rw_ios_per_sec": 0, 00:39:58.347 "rw_mbytes_per_sec": 0, 00:39:58.347 "r_mbytes_per_sec": 0, 00:39:58.347 "w_mbytes_per_sec": 0 00:39:58.347 }, 00:39:58.347 "claimed": false, 00:39:58.347 "zoned": false, 00:39:58.347 "supported_io_types": { 00:39:58.347 "read": true, 00:39:58.347 "write": true, 00:39:58.347 "unmap": true, 00:39:58.347 "flush": true, 00:39:58.347 "reset": true, 00:39:58.347 "nvme_admin": true, 00:39:58.347 "nvme_io": true, 00:39:58.347 "nvme_io_md": false, 00:39:58.347 "write_zeroes": true, 00:39:58.347 "zcopy": false, 00:39:58.347 "get_zone_info": false, 00:39:58.347 "zone_management": false, 00:39:58.347 "zone_append": false, 00:39:58.347 "compare": true, 00:39:58.347 "compare_and_write": true, 00:39:58.347 "abort": true, 00:39:58.347 "seek_hole": false, 00:39:58.347 "seek_data": false, 00:39:58.347 "copy": true, 00:39:58.347 "nvme_iov_md": false 00:39:58.347 }, 00:39:58.347 "memory_domains": [ 00:39:58.347 { 00:39:58.347 "dma_device_id": "system", 00:39:58.347 "dma_device_type": 1 00:39:58.347 } 00:39:58.347 ], 00:39:58.347 "driver_specific": { 00:39:58.347 "nvme": [ 00:39:58.347 { 00:39:58.347 "trid": { 00:39:58.347 "trtype": "TCP", 00:39:58.347 "adrfam": "IPv4", 00:39:58.347 "traddr": "10.0.0.3", 00:39:58.347 "trsvcid": "4420", 00:39:58.347 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:58.347 }, 00:39:58.347 "ctrlr_data": { 00:39:58.347 "cntlid": 1, 00:39:58.347 "vendor_id": "0x8086", 00:39:58.347 "model_number": "SPDK bdev Controller", 00:39:58.347 "serial_number": "SPDK0", 00:39:58.347 "firmware_revision": "25.01", 00:39:58.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:58.347 "oacs": { 00:39:58.347 "security": 0, 00:39:58.347 "format": 0, 00:39:58.347 "firmware": 0, 
00:39:58.347 "ns_manage": 0 00:39:58.347 }, 00:39:58.347 "multi_ctrlr": true, 00:39:58.347 "ana_reporting": false 00:39:58.347 }, 00:39:58.347 "vs": { 00:39:58.347 "nvme_version": "1.3" 00:39:58.347 }, 00:39:58.347 "ns_data": { 00:39:58.347 "id": 1, 00:39:58.347 "can_share": true 00:39:58.347 } 00:39:58.347 } 00:39:58.347 ], 00:39:58.347 "mp_policy": "active_passive" 00:39:58.347 } 00:39:58.347 } 00:39:58.347 ] 00:39:58.347 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63128 00:39:58.347 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:58.347 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:58.603 Running I/O for 10 seconds... 00:39:59.534 Latency(us) 00:39:59.534 [2024-12-09T05:30:46.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:59.534 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:39:59.534 [2024-12-09T05:30:46.506Z] =================================================================================================================== 00:39:59.534 [2024-12-09T05:30:46.506Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:39:59.534 00:40:00.488 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:40:00.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:00.488 Nvme0n1 : 2.00 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:40:00.488 [2024-12-09T05:30:47.460Z] =================================================================================================================== 00:40:00.488 [2024-12-09T05:30:47.460Z] Total : 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:40:00.488 00:40:00.746 true 00:40:00.746 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:40:00.746 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:01.004 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:01.004 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:01.004 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63128 00:40:01.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:01.570 Nvme0n1 : 3.00 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:40:01.570 [2024-12-09T05:30:48.542Z] =================================================================================================================== 00:40:01.570 [2024-12-09T05:30:48.542Z] Total : 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:40:01.570 00:40:02.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:02.504 Nvme0n1 : 4.00 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:40:02.504 [2024-12-09T05:30:49.476Z] 
=================================================================================================================== 00:40:02.504 [2024-12-09T05:30:49.476Z] Total : 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:40:02.504 00:40:03.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:03.880 Nvme0n1 : 5.00 7061.20 27.58 0.00 0.00 0.00 0.00 0.00 00:40:03.880 [2024-12-09T05:30:50.852Z] =================================================================================================================== 00:40:03.880 [2024-12-09T05:30:50.852Z] Total : 7061.20 27.58 0.00 0.00 0.00 0.00 0.00 00:40:03.880 00:40:04.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:04.817 Nvme0n1 : 6.00 6948.00 27.14 0.00 0.00 0.00 0.00 0.00 00:40:04.817 [2024-12-09T05:30:51.789Z] =================================================================================================================== 00:40:04.817 [2024-12-09T05:30:51.789Z] Total : 6948.00 27.14 0.00 0.00 0.00 0.00 0.00 00:40:04.817 00:40:05.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:05.753 Nvme0n1 : 7.00 6935.14 27.09 0.00 0.00 0.00 0.00 0.00 00:40:05.753 [2024-12-09T05:30:52.725Z] =================================================================================================================== 00:40:05.753 [2024-12-09T05:30:52.725Z] Total : 6935.14 27.09 0.00 0.00 0.00 0.00 0.00 00:40:05.753 00:40:06.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:06.685 Nvme0n1 : 8.00 6925.50 27.05 0.00 0.00 0.00 0.00 0.00 00:40:06.685 [2024-12-09T05:30:53.657Z] =================================================================================================================== 00:40:06.685 [2024-12-09T05:30:53.657Z] Total : 6925.50 27.05 0.00 0.00 0.00 0.00 0.00 00:40:06.685 00:40:07.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:07.619 Nvme0n1 : 9.00 6918.00 27.02 0.00 0.00 0.00 0.00 0.00 00:40:07.619 [2024-12-09T05:30:54.591Z] =================================================================================================================== 00:40:07.619 [2024-12-09T05:30:54.591Z] Total : 6918.00 27.02 0.00 0.00 0.00 0.00 0.00 00:40:07.619 00:40:08.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:08.553 Nvme0n1 : 10.00 6899.30 26.95 0.00 0.00 0.00 0.00 0.00 00:40:08.553 [2024-12-09T05:30:55.525Z] =================================================================================================================== 00:40:08.553 [2024-12-09T05:30:55.525Z] Total : 6899.30 26.95 0.00 0.00 0.00 0.00 0.00 00:40:08.553 00:40:08.553 00:40:08.553 Latency(us) 00:40:08.553 [2024-12-09T05:30:55.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:08.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:08.553 Nvme0n1 : 10.00 6908.71 26.99 0.00 0.00 18521.08 12511.42 115343.36 00:40:08.553 [2024-12-09T05:30:55.525Z] =================================================================================================================== 00:40:08.553 [2024-12-09T05:30:55.525Z] Total : 6908.71 26.99 0.00 0.00 18521.08 12511.42 115343.36 00:40:08.553 { 00:40:08.553 "results": [ 00:40:08.553 { 00:40:08.553 "job": "Nvme0n1", 00:40:08.553 "core_mask": "0x2", 00:40:08.553 "workload": "randwrite", 00:40:08.553 "status": "finished", 00:40:08.553 "queue_depth": 128, 00:40:08.553 "io_size": 4096, 00:40:08.553 "runtime": 
10.0049, 00:40:08.553 "iops": 6908.714729782407, 00:40:08.553 "mibps": 26.987166913212526, 00:40:08.553 "io_failed": 0, 00:40:08.553 "io_timeout": 0, 00:40:08.553 "avg_latency_us": 18521.082940140543, 00:40:08.553 "min_latency_us": 12511.418181818182, 00:40:08.553 "max_latency_us": 115343.36 00:40:08.553 } 00:40:08.553 ], 00:40:08.553 "core_count": 1 00:40:08.553 } 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63112 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63112 ']' 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63112 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63112 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:08.553 killing process with pid 63112 00:40:08.553 Received shutdown signal, test time was about 10.000000 seconds 00:40:08.553 00:40:08.553 Latency(us) 00:40:08.553 [2024-12-09T05:30:55.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:08.553 [2024-12-09T05:30:55.525Z] =================================================================================================================== 00:40:08.553 [2024-12-09T05:30:55.525Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63112' 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63112 00:40:08.553 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63112 00:40:08.810 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:40:09.067 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:09.325 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:40:09.325 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:09.584 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:09.584 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:09.584 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:09.841 [2024-12-09 05:30:56.783185] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:10.100 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:40:10.100 request: 00:40:10.100 { 00:40:10.100 "uuid": "8db1a575-0f4f-46e1-84ed-9321a3aefb6f", 00:40:10.100 "method": "bdev_lvol_get_lvstores", 00:40:10.100 "req_id": 1 00:40:10.100 } 00:40:10.100 Got JSON-RPC error response 00:40:10.100 response: 00:40:10.100 { 00:40:10.100 "code": -19, 00:40:10.100 "message": "No such device" 00:40:10.100 } 00:40:10.358 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:40:10.358 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:10.358 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:10.358 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:10.358 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:10.616 aio_bdev 00:40:10.617 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
7471f99a-0307-4e7d-9049-b872c6d77d88 00:40:10.617 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7471f99a-0307-4e7d-9049-b872c6d77d88 00:40:10.617 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:10.617 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:40:10.617 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:10.617 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:10.617 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:10.875 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7471f99a-0307-4e7d-9049-b872c6d77d88 -t 2000 00:40:11.134 [ 00:40:11.134 { 00:40:11.134 "name": "7471f99a-0307-4e7d-9049-b872c6d77d88", 00:40:11.134 "aliases": [ 00:40:11.134 "lvs/lvol" 00:40:11.134 ], 00:40:11.134 "product_name": "Logical Volume", 00:40:11.134 "block_size": 4096, 00:40:11.134 "num_blocks": 38912, 00:40:11.134 "uuid": "7471f99a-0307-4e7d-9049-b872c6d77d88", 00:40:11.134 "assigned_rate_limits": { 00:40:11.134 "rw_ios_per_sec": 0, 00:40:11.134 "rw_mbytes_per_sec": 0, 00:40:11.134 "r_mbytes_per_sec": 0, 00:40:11.134 "w_mbytes_per_sec": 0 00:40:11.134 }, 00:40:11.134 "claimed": false, 00:40:11.134 "zoned": false, 00:40:11.134 "supported_io_types": { 00:40:11.134 "read": true, 00:40:11.134 "write": true, 00:40:11.134 "unmap": true, 00:40:11.134 "flush": false, 00:40:11.134 "reset": true, 00:40:11.134 "nvme_admin": false, 00:40:11.134 "nvme_io": false, 00:40:11.134 "nvme_io_md": false, 00:40:11.134 "write_zeroes": true, 00:40:11.134 "zcopy": false, 00:40:11.134 "get_zone_info": false, 00:40:11.134 "zone_management": false, 00:40:11.134 "zone_append": false, 00:40:11.134 "compare": false, 00:40:11.134 "compare_and_write": false, 00:40:11.134 "abort": false, 00:40:11.134 "seek_hole": true, 00:40:11.134 "seek_data": true, 00:40:11.134 "copy": false, 00:40:11.134 "nvme_iov_md": false 00:40:11.134 }, 00:40:11.134 "driver_specific": { 00:40:11.134 "lvol": { 00:40:11.134 "lvol_store_uuid": "8db1a575-0f4f-46e1-84ed-9321a3aefb6f", 00:40:11.134 "base_bdev": "aio_bdev", 00:40:11.134 "thin_provision": false, 00:40:11.134 "num_allocated_clusters": 38, 00:40:11.134 "snapshot": false, 00:40:11.134 "clone": false, 00:40:11.134 "esnap_clone": false 00:40:11.134 } 00:40:11.134 } 00:40:11.134 } 00:40:11.134 ] 00:40:11.134 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:40:11.134 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:40:11.134 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:11.393 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:11.393 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:40:11.393 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:11.652 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:11.652 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7471f99a-0307-4e7d-9049-b872c6d77d88 00:40:11.910 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8db1a575-0f4f-46e1-84ed-9321a3aefb6f 00:40:12.187 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:12.511 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:12.770 ************************************ 00:40:12.770 END TEST lvs_grow_clean 00:40:12.770 ************************************ 00:40:12.770 00:40:12.770 real 0m18.328s 00:40:12.770 user 0m17.359s 00:40:12.770 sys 0m2.370s 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:12.770 ************************************ 00:40:12.770 START TEST lvs_grow_dirty 00:40:12.770 ************************************ 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:12.770 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:13.358 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:13.358 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:13.358 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:13.358 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:13.358 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:13.617 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:13.617 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:13.617 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 lvol 150 00:40:14.184 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6b8d8c13-4a07-4efe-8bbf-12626277e344 00:40:14.184 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:14.184 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:14.442 [2024-12-09 05:31:01.170404] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:14.442 [2024-12-09 05:31:01.170662] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:14.442 true 00:40:14.442 05:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:14.442 05:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:14.700 05:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:14.700 05:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:14.959 05:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b8d8c13-4a07-4efe-8bbf-12626277e344 00:40:15.218 05:31:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:40:15.785 [2024-12-09 05:31:02.467104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:15.785 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:40:15.785 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:15.785 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63380 00:40:16.044 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:16.044 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63380 /var/tmp/bdevperf.sock 00:40:16.044 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63380 ']' 00:40:16.044 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:16.044 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:16.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:16.044 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:16.044 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:16.044 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:16.044 [2024-12-09 05:31:02.797028] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
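The dirty variant provisions an identical stack (a fresh 200 MiB aio_bdev file, lvstore 30ff1f32-4fa7-4916-99a4-d71537ad3266 with the same 4 MiB clusters, a 150 MiB lvol 6b8d8c13-4a07-4efe-8bbf-12626277e344 exported through nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420) and launches a second bdevperf instance. As in the clean run, the lvstore is grown while the random-write workload is in flight; condensed, with the same $SPDK_DIR/$lvs shorthand as in the earlier sketch, that step is just:

"$SPDK_DIR/scripts/rpc.py" bdev_lvol_grow_lvstore -u "$lvs"
"$SPDK_DIR/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99 once the 400 MiB backing file is picked up

What differs is the teardown: instead of deleting the lvol and lvstore cleanly, this path kills the target outright so the lvstore is left dirty.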
00:40:16.044 [2024-12-09 05:31:02.797274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63380 ] 00:40:16.044 [2024-12-09 05:31:02.946663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.044 [2024-12-09 05:31:02.980146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.044 [2024-12-09 05:31:03.010407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:16.302 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:16.302 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:16.302 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:16.559 Nvme0n1 00:40:16.559 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:16.816 [ 00:40:16.816 { 00:40:16.816 "name": "Nvme0n1", 00:40:16.816 "aliases": [ 00:40:16.816 "6b8d8c13-4a07-4efe-8bbf-12626277e344" 00:40:16.816 ], 00:40:16.816 "product_name": "NVMe disk", 00:40:16.816 "block_size": 4096, 00:40:16.816 "num_blocks": 38912, 00:40:16.816 "uuid": "6b8d8c13-4a07-4efe-8bbf-12626277e344", 00:40:16.816 "numa_id": -1, 00:40:16.816 "assigned_rate_limits": { 00:40:16.816 "rw_ios_per_sec": 0, 00:40:16.816 "rw_mbytes_per_sec": 0, 00:40:16.816 "r_mbytes_per_sec": 0, 00:40:16.816 "w_mbytes_per_sec": 0 00:40:16.816 }, 00:40:16.816 "claimed": false, 00:40:16.816 "zoned": false, 00:40:16.817 "supported_io_types": { 00:40:16.817 "read": true, 00:40:16.817 "write": true, 00:40:16.817 "unmap": true, 00:40:16.817 "flush": true, 00:40:16.817 "reset": true, 00:40:16.817 "nvme_admin": true, 00:40:16.817 "nvme_io": true, 00:40:16.817 "nvme_io_md": false, 00:40:16.817 "write_zeroes": true, 00:40:16.817 "zcopy": false, 00:40:16.817 "get_zone_info": false, 00:40:16.817 "zone_management": false, 00:40:16.817 "zone_append": false, 00:40:16.817 "compare": true, 00:40:16.817 "compare_and_write": true, 00:40:16.817 "abort": true, 00:40:16.817 "seek_hole": false, 00:40:16.817 "seek_data": false, 00:40:16.817 "copy": true, 00:40:16.817 "nvme_iov_md": false 00:40:16.817 }, 00:40:16.817 "memory_domains": [ 00:40:16.817 { 00:40:16.817 "dma_device_id": "system", 00:40:16.817 "dma_device_type": 1 00:40:16.817 } 00:40:16.817 ], 00:40:16.817 "driver_specific": { 00:40:16.817 "nvme": [ 00:40:16.817 { 00:40:16.817 "trid": { 00:40:16.817 "trtype": "TCP", 00:40:16.817 "adrfam": "IPv4", 00:40:16.817 "traddr": "10.0.0.3", 00:40:16.817 "trsvcid": "4420", 00:40:16.817 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:16.817 }, 00:40:16.817 "ctrlr_data": { 00:40:16.817 "cntlid": 1, 00:40:16.817 "vendor_id": "0x8086", 00:40:16.817 "model_number": "SPDK bdev Controller", 00:40:16.817 "serial_number": "SPDK0", 00:40:16.817 "firmware_revision": "25.01", 00:40:16.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:16.817 "oacs": { 00:40:16.817 "security": 0, 00:40:16.817 "format": 0, 00:40:16.817 "firmware": 0, 
00:40:16.817 "ns_manage": 0 00:40:16.817 }, 00:40:16.817 "multi_ctrlr": true, 00:40:16.817 "ana_reporting": false 00:40:16.817 }, 00:40:16.817 "vs": { 00:40:16.817 "nvme_version": "1.3" 00:40:16.817 }, 00:40:16.817 "ns_data": { 00:40:16.817 "id": 1, 00:40:16.817 "can_share": true 00:40:16.817 } 00:40:16.817 } 00:40:16.817 ], 00:40:16.817 "mp_policy": "active_passive" 00:40:16.817 } 00:40:16.817 } 00:40:16.817 ] 00:40:16.817 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63396 00:40:16.817 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:16.817 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:17.074 Running I/O for 10 seconds... 00:40:18.005 Latency(us) 00:40:18.005 [2024-12-09T05:31:04.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:18.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:18.005 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:40:18.005 [2024-12-09T05:31:04.977Z] =================================================================================================================== 00:40:18.005 [2024-12-09T05:31:04.977Z] Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:40:18.005 00:40:18.972 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:18.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:18.972 Nvme0n1 : 2.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:40:18.972 [2024-12-09T05:31:05.944Z] =================================================================================================================== 00:40:18.972 [2024-12-09T05:31:05.944Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:40:18.972 00:40:19.229 true 00:40:19.229 05:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:19.229 05:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:19.486 05:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:19.486 05:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:19.486 05:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63396 00:40:20.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:20.051 Nvme0n1 : 3.00 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:40:20.051 [2024-12-09T05:31:07.023Z] =================================================================================================================== 00:40:20.051 [2024-12-09T05:31:07.023Z] Total : 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:40:20.051 00:40:20.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:20.986 Nvme0n1 : 4.00 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:40:20.986 [2024-12-09T05:31:07.958Z] 
=================================================================================================================== 00:40:20.986 [2024-12-09T05:31:07.958Z] Total : 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:40:20.986 00:40:21.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:21.920 Nvme0n1 : 5.00 7162.80 27.98 0.00 0.00 0.00 0.00 0.00 00:40:21.920 [2024-12-09T05:31:08.892Z] =================================================================================================================== 00:40:21.920 [2024-12-09T05:31:08.892Z] Total : 7162.80 27.98 0.00 0.00 0.00 0.00 0.00 00:40:21.920 00:40:23.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:23.296 Nvme0n1 : 6.00 7133.17 27.86 0.00 0.00 0.00 0.00 0.00 00:40:23.296 [2024-12-09T05:31:10.268Z] =================================================================================================================== 00:40:23.296 [2024-12-09T05:31:10.268Z] Total : 7133.17 27.86 0.00 0.00 0.00 0.00 0.00 00:40:23.296 00:40:24.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:24.243 Nvme0n1 : 7.00 7091.00 27.70 0.00 0.00 0.00 0.00 0.00 00:40:24.243 [2024-12-09T05:31:11.215Z] =================================================================================================================== 00:40:24.243 [2024-12-09T05:31:11.215Z] Total : 7091.00 27.70 0.00 0.00 0.00 0.00 0.00 00:40:24.243 00:40:25.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:25.202 Nvme0n1 : 8.00 6901.88 26.96 0.00 0.00 0.00 0.00 0.00 00:40:25.202 [2024-12-09T05:31:12.174Z] =================================================================================================================== 00:40:25.202 [2024-12-09T05:31:12.174Z] Total : 6901.88 26.96 0.00 0.00 0.00 0.00 0.00 00:40:25.202 00:40:26.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:26.136 Nvme0n1 : 9.00 6854.67 26.78 0.00 0.00 0.00 0.00 0.00 00:40:26.136 [2024-12-09T05:31:13.108Z] =================================================================================================================== 00:40:26.136 [2024-12-09T05:31:13.108Z] Total : 6854.67 26.78 0.00 0.00 0.00 0.00 0.00 00:40:26.136 00:40:27.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:27.070 Nvme0n1 : 10.00 6842.30 26.73 0.00 0.00 0.00 0.00 0.00 00:40:27.070 [2024-12-09T05:31:14.042Z] =================================================================================================================== 00:40:27.070 [2024-12-09T05:31:14.042Z] Total : 6842.30 26.73 0.00 0.00 0.00 0.00 0.00 00:40:27.070 00:40:27.070 00:40:27.070 Latency(us) 00:40:27.070 [2024-12-09T05:31:14.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:27.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:27.070 Nvme0n1 : 10.02 6844.56 26.74 0.00 0.00 18695.67 11558.17 175398.17 00:40:27.070 [2024-12-09T05:31:14.042Z] =================================================================================================================== 00:40:27.070 [2024-12-09T05:31:14.042Z] Total : 6844.56 26.74 0.00 0.00 18695.67 11558.17 175398.17 00:40:27.070 { 00:40:27.070 "results": [ 00:40:27.070 { 00:40:27.070 "job": "Nvme0n1", 00:40:27.070 "core_mask": "0x2", 00:40:27.070 "workload": "randwrite", 00:40:27.070 "status": "finished", 00:40:27.070 "queue_depth": 128, 00:40:27.070 "io_size": 4096, 00:40:27.070 "runtime": 
10.015405, 00:40:27.070 "iops": 6844.555961541246, 00:40:27.070 "mibps": 26.73654672477049, 00:40:27.070 "io_failed": 0, 00:40:27.070 "io_timeout": 0, 00:40:27.070 "avg_latency_us": 18695.670130878007, 00:40:27.070 "min_latency_us": 11558.167272727273, 00:40:27.070 "max_latency_us": 175398.16727272727 00:40:27.070 } 00:40:27.070 ], 00:40:27.070 "core_count": 1 00:40:27.070 } 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63380 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63380 ']' 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63380 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63380 00:40:27.070 killing process with pid 63380 00:40:27.070 Received shutdown signal, test time was about 10.000000 seconds 00:40:27.070 00:40:27.070 Latency(us) 00:40:27.070 [2024-12-09T05:31:14.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:27.070 [2024-12-09T05:31:14.042Z] =================================================================================================================== 00:40:27.070 [2024-12-09T05:31:14.042Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63380' 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63380 00:40:27.070 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63380 00:40:27.328 05:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:40:27.586 05:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:27.843 05:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:27.843 05:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:28.101 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:28.101 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:28.101 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63032 
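Note the "kill -9 63032" above: rather than tearing the lvstore down, the test SIGKILLs the original nvmf target so the lvstore is never cleanly closed, which is the "dirty" part of lvs_grow_dirty. The free_clusters=61 read out just before the kill is consistent with the earlier numbers: the grown store reports 99 data clusters, the 150 MiB lvol occupies 38 of them (150 MiB rounded up to 4 MiB clusters is 38 clusters, matching num_allocated_clusters 38 and the 38912 x 4096-byte blocks in the bdev dump), and 99 - 38 = 61 clusters remain free.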
00:40:28.101 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63032 00:40:28.358 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63032 Killed "${NVMF_APP[@]}" "$@" 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:28.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63536 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63536 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63536 ']' 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:28.358 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:28.358 [2024-12-09 05:31:15.155110] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:40:28.358 [2024-12-09 05:31:15.156189] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:28.358 [2024-12-09 05:31:15.311208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.616 [2024-12-09 05:31:15.344361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:28.616 [2024-12-09 05:31:15.344419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:28.616 [2024-12-09 05:31:15.344431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:28.616 [2024-12-09 05:31:15.344439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:28.616 [2024-12-09 05:31:15.344446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
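With the replacement target up (nvmfpid=63536), the remaining checks verify that the grow survived the unclean shutdown: re-creating the aio bdev triggers blobstore recovery (the "Performing recovery on blobstore" notices below), after which the lvstore must still report the grown geometry. Condensed, again using the $SPDK_DIR/$lvs shorthand from the sketches above:

"$SPDK_DIR/scripts/rpc.py" bdev_aio_create "$SPDK_DIR/test/nvmf/target/aio_bdev" aio_bdev 4096   # reloads the lvstore; recovery runs here
"$SPDK_DIR/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # still 61
"$SPDK_DIR/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 99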
00:40:28.616 [2024-12-09 05:31:15.344763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.616 [2024-12-09 05:31:15.374929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:28.616 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:28.616 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:28.616 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:28.616 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:28.616 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:28.616 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:28.616 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:28.873 [2024-12-09 05:31:15.790219] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:28.873 [2024-12-09 05:31:15.790633] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:28.873 [2024-12-09 05:31:15.790833] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:28.873 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:28.873 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6b8d8c13-4a07-4efe-8bbf-12626277e344 00:40:28.873 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6b8d8c13-4a07-4efe-8bbf-12626277e344 00:40:28.873 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:28.873 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:28.873 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:28.873 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:28.873 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:29.437 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b8d8c13-4a07-4efe-8bbf-12626277e344 -t 2000 00:40:29.694 [ 00:40:29.694 { 00:40:29.694 "name": "6b8d8c13-4a07-4efe-8bbf-12626277e344", 00:40:29.694 "aliases": [ 00:40:29.694 "lvs/lvol" 00:40:29.694 ], 00:40:29.694 "product_name": "Logical Volume", 00:40:29.694 "block_size": 4096, 00:40:29.694 "num_blocks": 38912, 00:40:29.694 "uuid": "6b8d8c13-4a07-4efe-8bbf-12626277e344", 00:40:29.694 "assigned_rate_limits": { 00:40:29.694 "rw_ios_per_sec": 0, 00:40:29.694 "rw_mbytes_per_sec": 0, 00:40:29.694 "r_mbytes_per_sec": 0, 00:40:29.694 "w_mbytes_per_sec": 0 00:40:29.694 }, 00:40:29.694 
"claimed": false, 00:40:29.694 "zoned": false, 00:40:29.694 "supported_io_types": { 00:40:29.694 "read": true, 00:40:29.694 "write": true, 00:40:29.694 "unmap": true, 00:40:29.694 "flush": false, 00:40:29.694 "reset": true, 00:40:29.694 "nvme_admin": false, 00:40:29.694 "nvme_io": false, 00:40:29.694 "nvme_io_md": false, 00:40:29.694 "write_zeroes": true, 00:40:29.694 "zcopy": false, 00:40:29.694 "get_zone_info": false, 00:40:29.694 "zone_management": false, 00:40:29.694 "zone_append": false, 00:40:29.694 "compare": false, 00:40:29.694 "compare_and_write": false, 00:40:29.694 "abort": false, 00:40:29.694 "seek_hole": true, 00:40:29.694 "seek_data": true, 00:40:29.694 "copy": false, 00:40:29.694 "nvme_iov_md": false 00:40:29.694 }, 00:40:29.694 "driver_specific": { 00:40:29.694 "lvol": { 00:40:29.694 "lvol_store_uuid": "30ff1f32-4fa7-4916-99a4-d71537ad3266", 00:40:29.694 "base_bdev": "aio_bdev", 00:40:29.694 "thin_provision": false, 00:40:29.694 "num_allocated_clusters": 38, 00:40:29.694 "snapshot": false, 00:40:29.694 "clone": false, 00:40:29.694 "esnap_clone": false 00:40:29.694 } 00:40:29.694 } 00:40:29.694 } 00:40:29.694 ] 00:40:29.694 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:29.694 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:29.694 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:29.951 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:29.951 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:29.951 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:30.208 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:30.208 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:30.464 [2024-12-09 05:31:17.343989] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:30.464 05:31:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:30.464 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:31.030 request: 00:40:31.030 { 00:40:31.030 "uuid": "30ff1f32-4fa7-4916-99a4-d71537ad3266", 00:40:31.030 "method": "bdev_lvol_get_lvstores", 00:40:31.030 "req_id": 1 00:40:31.030 } 00:40:31.030 Got JSON-RPC error response 00:40:31.030 response: 00:40:31.030 { 00:40:31.030 "code": -19, 00:40:31.030 "message": "No such device" 00:40:31.030 } 00:40:31.030 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:40:31.030 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:31.030 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:31.030 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:31.030 05:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:31.289 aio_bdev 00:40:31.289 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6b8d8c13-4a07-4efe-8bbf-12626277e344 00:40:31.289 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6b8d8c13-4a07-4efe-8bbf-12626277e344 00:40:31.289 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:31.289 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:31.289 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:31.289 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:31.289 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:31.546 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b8d8c13-4a07-4efe-8bbf-12626277e344 -t 2000 00:40:31.805 [ 00:40:31.805 { 
00:40:31.805 "name": "6b8d8c13-4a07-4efe-8bbf-12626277e344", 00:40:31.805 "aliases": [ 00:40:31.805 "lvs/lvol" 00:40:31.805 ], 00:40:31.805 "product_name": "Logical Volume", 00:40:31.805 "block_size": 4096, 00:40:31.805 "num_blocks": 38912, 00:40:31.805 "uuid": "6b8d8c13-4a07-4efe-8bbf-12626277e344", 00:40:31.805 "assigned_rate_limits": { 00:40:31.805 "rw_ios_per_sec": 0, 00:40:31.805 "rw_mbytes_per_sec": 0, 00:40:31.805 "r_mbytes_per_sec": 0, 00:40:31.805 "w_mbytes_per_sec": 0 00:40:31.805 }, 00:40:31.805 "claimed": false, 00:40:31.805 "zoned": false, 00:40:31.805 "supported_io_types": { 00:40:31.805 "read": true, 00:40:31.805 "write": true, 00:40:31.805 "unmap": true, 00:40:31.805 "flush": false, 00:40:31.805 "reset": true, 00:40:31.805 "nvme_admin": false, 00:40:31.805 "nvme_io": false, 00:40:31.805 "nvme_io_md": false, 00:40:31.805 "write_zeroes": true, 00:40:31.805 "zcopy": false, 00:40:31.805 "get_zone_info": false, 00:40:31.805 "zone_management": false, 00:40:31.805 "zone_append": false, 00:40:31.805 "compare": false, 00:40:31.805 "compare_and_write": false, 00:40:31.805 "abort": false, 00:40:31.805 "seek_hole": true, 00:40:31.805 "seek_data": true, 00:40:31.805 "copy": false, 00:40:31.805 "nvme_iov_md": false 00:40:31.805 }, 00:40:31.805 "driver_specific": { 00:40:31.805 "lvol": { 00:40:31.805 "lvol_store_uuid": "30ff1f32-4fa7-4916-99a4-d71537ad3266", 00:40:31.805 "base_bdev": "aio_bdev", 00:40:31.805 "thin_provision": false, 00:40:31.805 "num_allocated_clusters": 38, 00:40:31.805 "snapshot": false, 00:40:31.805 "clone": false, 00:40:31.805 "esnap_clone": false 00:40:31.805 } 00:40:31.805 } 00:40:31.805 } 00:40:31.805 ] 00:40:31.805 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:31.805 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:31.805 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:32.064 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:32.064 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:32.064 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:32.344 05:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:32.344 05:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6b8d8c13-4a07-4efe-8bbf-12626277e344 00:40:32.650 05:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 30ff1f32-4fa7-4916-99a4-d71537ad3266 00:40:32.923 05:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:33.182 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:33.749 ************************************ 00:40:33.749 END TEST lvs_grow_dirty 00:40:33.749 ************************************ 00:40:33.749 00:40:33.749 real 0m20.818s 00:40:33.749 user 0m43.131s 00:40:33.749 sys 0m8.045s 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:33.749 nvmf_trace.0 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:33.749 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:34.316 rmmod nvme_tcp 00:40:34.316 rmmod nvme_fabrics 00:40:34.316 rmmod nvme_keyring 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63536 ']' 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63536 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63536 ']' 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63536 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:40:34.316 05:31:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63536 00:40:34.316 killing process with pid 63536 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63536' 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63536 00:40:34.316 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63536 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:34.574 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:40:34.833 ************************************ 00:40:34.833 END TEST nvmf_lvs_grow 00:40:34.833 ************************************ 00:40:34.833 00:40:34.833 real 0m41.606s 00:40:34.833 user 1m7.423s 00:40:34.833 sys 0m11.373s 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:40:34.833 ************************************ 00:40:34.833 START TEST nvmf_bdev_io_wait 00:40:34.833 ************************************ 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:40:34.833 * Looking for test storage... 
00:40:34.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:34.833 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:34.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.834 --rc genhtml_branch_coverage=1 00:40:34.834 --rc genhtml_function_coverage=1 00:40:34.834 --rc genhtml_legend=1 00:40:34.834 --rc geninfo_all_blocks=1 00:40:34.834 --rc geninfo_unexecuted_blocks=1 00:40:34.834 00:40:34.834 ' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:34.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.834 --rc genhtml_branch_coverage=1 00:40:34.834 --rc genhtml_function_coverage=1 00:40:34.834 --rc genhtml_legend=1 00:40:34.834 --rc geninfo_all_blocks=1 00:40:34.834 --rc geninfo_unexecuted_blocks=1 00:40:34.834 00:40:34.834 ' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:34.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.834 --rc genhtml_branch_coverage=1 00:40:34.834 --rc genhtml_function_coverage=1 00:40:34.834 --rc genhtml_legend=1 00:40:34.834 --rc geninfo_all_blocks=1 00:40:34.834 --rc geninfo_unexecuted_blocks=1 00:40:34.834 00:40:34.834 ' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:34.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.834 --rc genhtml_branch_coverage=1 00:40:34.834 --rc genhtml_function_coverage=1 00:40:34.834 --rc genhtml_legend=1 00:40:34.834 --rc geninfo_all_blocks=1 00:40:34.834 --rc geninfo_unexecuted_blocks=1 00:40:34.834 00:40:34.834 ' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:34.834 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:40:34.834 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:40:35.093 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:40:35.093 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:40:35.093 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:40:35.093 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:40:35.093 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:35.093 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:40:35.093 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:35.094 
05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:40:35.094 Cannot find device "nvmf_init_br" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:40:35.094 Cannot find device "nvmf_init_br2" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:40:35.094 Cannot find device "nvmf_tgt_br" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:40:35.094 Cannot find device "nvmf_tgt_br2" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:40:35.094 Cannot find device "nvmf_init_br" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:40:35.094 Cannot find device "nvmf_init_br2" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:40:35.094 Cannot find device "nvmf_tgt_br" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:40:35.094 Cannot find device "nvmf_tgt_br2" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:40:35.094 Cannot find device "nvmf_br" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:40:35.094 Cannot find device "nvmf_init_if" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:40:35.094 Cannot find device "nvmf_init_if2" 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:35.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:40:35.094 
05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:35.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:35.094 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:35.094 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:35.094 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:35.353 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:40:35.354 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:35.354 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:40:35.354 00:40:35.354 --- 10.0.0.3 ping statistics --- 00:40:35.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.354 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:40:35.354 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:40:35.354 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:40:35.354 00:40:35.354 --- 10.0.0.4 ping statistics --- 00:40:35.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.354 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:35.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:35.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:40:35.354 00:40:35.354 --- 10.0.0.1 ping statistics --- 00:40:35.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.354 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:40:35.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:35.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:40:35.354 00:40:35.354 --- 10.0.0.2 ping statistics --- 00:40:35.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.354 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63901 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63901 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63901 ']' 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:35.354 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.354 [2024-12-09 05:31:22.305472] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:40:35.354 [2024-12-09 05:31:22.305761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:35.613 [2024-12-09 05:31:22.454174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:35.613 [2024-12-09 05:31:22.485125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:35.613 [2024-12-09 05:31:22.485377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:35.613 [2024-12-09 05:31:22.485552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:35.613 [2024-12-09 05:31:22.485684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:35.613 [2024-12-09 05:31:22.485724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:35.613 [2024-12-09 05:31:22.486588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:35.613 [2024-12-09 05:31:22.486680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:35.613 [2024-12-09 05:31:22.486755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:35.613 [2024-12-09 05:31:22.486755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:35.613 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:35.613 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:40:35.613 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:35.613 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:35.613 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.872 [2024-12-09 05:31:22.668393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.872 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.872 [2024-12-09 05:31:22.683698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.873 Malloc0 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.873 [2024-12-09 05:31:22.735468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63934 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63936 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:35.873 05:31:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:35.873 { 00:40:35.873 "params": { 00:40:35.873 "name": "Nvme$subsystem", 00:40:35.873 "trtype": "$TEST_TRANSPORT", 00:40:35.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.873 "adrfam": "ipv4", 00:40:35.873 "trsvcid": "$NVMF_PORT", 00:40:35.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.873 "hdgst": ${hdgst:-false}, 00:40:35.873 "ddgst": ${ddgst:-false} 00:40:35.873 }, 00:40:35.873 "method": "bdev_nvme_attach_controller" 00:40:35.873 } 00:40:35.873 EOF 00:40:35.873 )") 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63938 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:35.873 { 00:40:35.873 "params": { 00:40:35.873 "name": "Nvme$subsystem", 00:40:35.873 "trtype": "$TEST_TRANSPORT", 00:40:35.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.873 "adrfam": "ipv4", 00:40:35.873 "trsvcid": "$NVMF_PORT", 00:40:35.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.873 "hdgst": ${hdgst:-false}, 00:40:35.873 "ddgst": ${ddgst:-false} 00:40:35.873 }, 00:40:35.873 "method": "bdev_nvme_attach_controller" 00:40:35.873 } 00:40:35.873 EOF 00:40:35.873 )") 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63941 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:35.873 { 00:40:35.873 "params": { 00:40:35.873 "name": "Nvme$subsystem", 00:40:35.873 "trtype": 
"$TEST_TRANSPORT", 00:40:35.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.873 "adrfam": "ipv4", 00:40:35.873 "trsvcid": "$NVMF_PORT", 00:40:35.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.873 "hdgst": ${hdgst:-false}, 00:40:35.873 "ddgst": ${ddgst:-false} 00:40:35.873 }, 00:40:35.873 "method": "bdev_nvme_attach_controller" 00:40:35.873 } 00:40:35.873 EOF 00:40:35.873 )") 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:35.873 { 00:40:35.873 "params": { 00:40:35.873 "name": "Nvme$subsystem", 00:40:35.873 "trtype": "$TEST_TRANSPORT", 00:40:35.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.873 "adrfam": "ipv4", 00:40:35.873 "trsvcid": "$NVMF_PORT", 00:40:35.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.873 "hdgst": ${hdgst:-false}, 00:40:35.873 "ddgst": ${ddgst:-false} 00:40:35.873 }, 00:40:35.873 "method": "bdev_nvme_attach_controller" 00:40:35.873 } 00:40:35.873 EOF 00:40:35.873 )") 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:35.873 "params": { 00:40:35.873 "name": "Nvme1", 00:40:35.873 "trtype": "tcp", 00:40:35.873 "traddr": "10.0.0.3", 00:40:35.873 "adrfam": "ipv4", 00:40:35.873 "trsvcid": "4420", 00:40:35.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:35.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:35.873 "hdgst": false, 00:40:35.873 "ddgst": false 00:40:35.873 }, 00:40:35.873 "method": "bdev_nvme_attach_controller" 00:40:35.873 }' 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:35.873 "params": { 00:40:35.873 "name": "Nvme1", 00:40:35.873 "trtype": "tcp", 00:40:35.873 "traddr": "10.0.0.3", 00:40:35.873 "adrfam": "ipv4", 00:40:35.873 "trsvcid": "4420", 00:40:35.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:35.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:35.873 "hdgst": false, 00:40:35.873 "ddgst": false 00:40:35.873 }, 00:40:35.873 "method": "bdev_nvme_attach_controller" 00:40:35.873 }' 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:35.873 "params": { 00:40:35.873 "name": "Nvme1", 00:40:35.873 "trtype": "tcp", 00:40:35.873 "traddr": "10.0.0.3", 00:40:35.873 "adrfam": "ipv4", 00:40:35.873 "trsvcid": "4420", 00:40:35.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:35.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:35.873 "hdgst": false, 00:40:35.873 "ddgst": false 00:40:35.873 }, 00:40:35.873 "method": "bdev_nvme_attach_controller" 00:40:35.873 }' 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:35.873 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:35.874 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:35.874 "params": { 00:40:35.874 "name": "Nvme1", 00:40:35.874 "trtype": "tcp", 00:40:35.874 "traddr": "10.0.0.3", 00:40:35.874 "adrfam": "ipv4", 00:40:35.874 "trsvcid": "4420", 00:40:35.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:35.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:35.874 "hdgst": false, 00:40:35.874 "ddgst": false 00:40:35.874 }, 00:40:35.874 "method": "bdev_nvme_attach_controller" 00:40:35.874 }' 00:40:35.874 [2024-12-09 05:31:22.802455] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:40:35.874 [2024-12-09 05:31:22.802487] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:40:35.874 [2024-12-09 05:31:22.802552] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:35.874 [2024-12-09 05:31:22.802552] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:35.874 [2024-12-09 05:31:22.802709] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:40:35.874 [2024-12-09 05:31:22.802771] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:35.874 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63934 [2024-12-09 05:31:22.823267] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization...
00:40:35.874 [2024-12-09 05:31:22.823476] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:36.132 [2024-12-09 05:31:23.004057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.132 [2024-12-09 05:31:23.035713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:36.132 [2024-12-09 05:31:23.049463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.132 [2024-12-09 05:31:23.049494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:36.132 [2024-12-09 05:31:23.080921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:36.132 [2024-12-09 05:31:23.094715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:36.132 [2024-12-09 05:31:23.097372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.391 [2024-12-09 05:31:23.128722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:36.391 [2024-12-09 05:31:23.142714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:36.391 [2024-12-09 05:31:23.144383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.391 Running I/O for 1 seconds... 00:40:36.391 [2024-12-09 05:31:23.175272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:36.391 [2024-12-09 05:31:23.189023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:36.391 Running I/O for 1 seconds... 00:40:36.391 Running I/O for 1 seconds... 00:40:36.391 Running I/O for 1 seconds... 
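For reference, each of the four bdevperf jobs traced above (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80) is fed the gen_nvmf_target_json output shown in the trace through --json /dev/fd/63. A rough standalone equivalent of the read job is sketched below: the bdevperf flags and the attach-controller parameters are copied from the trace, while the outer "subsystems"/"bdev" wrapper and the /tmp/nvme_read.json path are assumptions about how the same config would be written to a plain file by hand.

    # Sketch only: stand-alone version of the 4 KiB read job traced above.
    # The outer "subsystems" wrapper is an assumed plain-file form of the
    # config the test builds on the fly; the params come from the trace.
    cat > /tmp/nvme_read.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 128 outstanding 4 KiB reads for 1 second on core mask 0x20, as in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 \
        --json /tmp/nvme_read.json -q 128 -o 4096 -w read -t 1 -s 256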
00:40:37.327 6250.00 IOPS, 24.41 MiB/s 00:40:37.327 Latency(us) 00:40:37.327 [2024-12-09T05:31:24.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:37.327 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:37.327 Nvme1n1 : 1.02 6252.41 24.42 0.00 0.00 20282.24 9294.20 36461.85 00:40:37.327 [2024-12-09T05:31:24.299Z] =================================================================================================================== 00:40:37.327 [2024-12-09T05:31:24.299Z] Total : 6252.41 24.42 0.00 0.00 20282.24 9294.20 36461.85 00:40:37.327 8010.00 IOPS, 31.29 MiB/s 00:40:37.327 Latency(us) 00:40:37.327 [2024-12-09T05:31:24.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:37.327 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:37.327 Nvme1n1 : 1.01 8039.30 31.40 0.00 0.00 15815.05 8638.84 24188.74 00:40:37.327 [2024-12-09T05:31:24.299Z] =================================================================================================================== 00:40:37.327 [2024-12-09T05:31:24.299Z] Total : 8039.30 31.40 0.00 0.00 15815.05 8638.84 24188.74 00:40:37.327 157272.00 IOPS, 614.34 MiB/s 00:40:37.327 Latency(us) 00:40:37.327 [2024-12-09T05:31:24.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:37.327 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:37.327 Nvme1n1 : 1.00 156894.67 612.87 0.00 0.00 811.30 392.84 2353.34 00:40:37.327 [2024-12-09T05:31:24.299Z] =================================================================================================================== 00:40:37.327 [2024-12-09T05:31:24.299Z] Total : 156894.67 612.87 0.00 0.00 811.30 392.84 2353.34 00:40:37.586 6082.00 IOPS, 23.76 MiB/s 00:40:37.586 Latency(us) 00:40:37.586 [2024-12-09T05:31:24.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:37.586 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:37.586 Nvme1n1 : 1.01 6200.23 24.22 0.00 0.00 20569.46 5808.87 47900.86 00:40:37.586 [2024-12-09T05:31:24.558Z] =================================================================================================================== 00:40:37.586 [2024-12-09T05:31:24.558Z] Total : 6200.23 24.22 0.00 0.00 20569.46 5808.87 47900.86 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63936 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63938 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63941 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:37.586 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:37.586 rmmod nvme_tcp 00:40:37.586 rmmod nvme_fabrics 00:40:37.845 rmmod nvme_keyring 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63901 ']' 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63901 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63901 ']' 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63901 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63901 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:37.845 killing process with pid 63901 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63901' 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63901 00:40:37.845 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63901 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:40:38.102 05:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:40:38.102 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:38.102 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:38.102 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:40:38.102 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.102 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.102 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:40:38.360 00:40:38.360 real 0m3.475s 00:40:38.360 user 0m13.772s 00:40:38.360 sys 0m1.933s 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:38.360 ************************************ 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:38.360 END TEST nvmf_bdev_io_wait 00:40:38.360 ************************************ 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:40:38.360 ************************************ 00:40:38.360 START TEST nvmf_queue_depth 00:40:38.360 ************************************ 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:40:38.360 * Looking for test storage... 
00:40:38.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:38.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.360 --rc genhtml_branch_coverage=1 00:40:38.360 --rc genhtml_function_coverage=1 00:40:38.360 --rc genhtml_legend=1 00:40:38.360 --rc geninfo_all_blocks=1 00:40:38.360 --rc geninfo_unexecuted_blocks=1 00:40:38.360 00:40:38.360 ' 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:38.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.360 --rc genhtml_branch_coverage=1 00:40:38.360 --rc genhtml_function_coverage=1 00:40:38.360 --rc genhtml_legend=1 00:40:38.360 --rc geninfo_all_blocks=1 00:40:38.360 --rc geninfo_unexecuted_blocks=1 00:40:38.360 00:40:38.360 ' 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:38.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.360 --rc genhtml_branch_coverage=1 00:40:38.360 --rc genhtml_function_coverage=1 00:40:38.360 --rc genhtml_legend=1 00:40:38.360 --rc geninfo_all_blocks=1 00:40:38.360 --rc geninfo_unexecuted_blocks=1 00:40:38.360 00:40:38.360 ' 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:38.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.360 --rc genhtml_branch_coverage=1 00:40:38.360 --rc genhtml_function_coverage=1 00:40:38.360 --rc genhtml_legend=1 00:40:38.360 --rc geninfo_all_blocks=1 00:40:38.360 --rc geninfo_unexecuted_blocks=1 00:40:38.360 00:40:38.360 ' 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:38.360 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:38.617 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:38.617 
05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:38.617 05:31:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:40:38.617 Cannot find device "nvmf_init_br" 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:40:38.617 Cannot find device "nvmf_init_br2" 00:40:38.617 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:40:38.618 Cannot find device "nvmf_tgt_br" 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:40:38.618 Cannot find device "nvmf_tgt_br2" 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:40:38.618 Cannot find device "nvmf_init_br" 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:40:38.618 Cannot find device "nvmf_init_br2" 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:40:38.618 Cannot find device "nvmf_tgt_br" 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:40:38.618 Cannot find device "nvmf_tgt_br2" 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:40:38.618 Cannot find device "nvmf_br" 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:40:38.618 Cannot find device "nvmf_init_if" 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:40:38.618 Cannot find device "nvmf_init_if2" 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:38.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:38.618 05:31:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:38.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:38.618 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:40:38.875 
05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:40:38.875 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:38.875 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:40:38.875 00:40:38.875 --- 10.0.0.3 ping statistics --- 00:40:38.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.875 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:40:38.875 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:40:38.875 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:40:38.875 00:40:38.875 --- 10.0.0.4 ping statistics --- 00:40:38.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.875 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:38.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:38.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:40:38.875 00:40:38.875 --- 10.0.0.1 ping statistics --- 00:40:38.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.875 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:40:38.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:38.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:40:38.875 00:40:38.875 --- 10.0.0.2 ping statistics --- 00:40:38.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.875 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64200 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64200 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64200 ']' 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:38.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:38.875 05:31:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.133 [2024-12-09 05:31:25.849748] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
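The ping exchanges above confirm the veth topology that nvmf_veth_init assembled a few lines earlier, and the nvmf_tgt instance now starting runs inside the nvmf_tgt_ns_spdk namespace of that topology. Condensed from the trace (interface names and addresses are the ones shown there; bringing the individual veth ends up is elided), the setup is roughly:

    # Sketch condensed from the nvmf_veth_init trace above, not the full helper.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" up
        ip link set "$port" master nvmf_br
    done
    # Allow NVMe/TCP (port 4420) in and let traffic cross the bridge.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT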
00:40:39.133 [2024-12-09 05:31:25.850091] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:39.133 [2024-12-09 05:31:26.014147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.133 [2024-12-09 05:31:26.052582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:39.133 [2024-12-09 05:31:26.052654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:39.133 [2024-12-09 05:31:26.052666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:39.133 [2024-12-09 05:31:26.052676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:39.133 [2024-12-09 05:31:26.052685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:39.133 [2024-12-09 05:31:26.053041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:39.133 [2024-12-09 05:31:26.088420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.391 [2024-12-09 05:31:26.189357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.391 Malloc0 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.391 [2024-12-09 05:31:26.233790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64219 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64219 /var/tmp/bdevperf.sock 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64219 ']' 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:39.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:39.391 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.391 [2024-12-09 05:31:26.296613] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
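Taken together, the rpc_cmd calls above and the bdevperf attach traced just below amount to the usual two-step NVMe-oF/TCP setup for this test. Reproduced outside the harness it would look roughly like the sequence below; the RPC verbs and arguments are exactly the ones in the trace, only the direct scripts/rpc.py invocation (instead of the harness's rpc_cmd wrapper) is assumed.

    # Target side: nvmf_tgt runs inside nvmf_tgt_ns_spdk and listens on 10.0.0.3:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Initiator side: bdevperf was started with -z, so it idles on its own RPC socket
    # until a controller is attached and the queue-depth run is kicked off.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests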
00:40:39.391 [2024-12-09 05:31:26.296711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64219 ] 00:40:39.649 [2024-12-09 05:31:26.453967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.649 [2024-12-09 05:31:26.492842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:39.649 [2024-12-09 05:31:26.526458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:39.649 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:39.649 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:39.649 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:39.649 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.649 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.907 NVMe0n1 00:40:39.907 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.907 05:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:39.907 Running I/O for 10 seconds... 00:40:42.255 6964.00 IOPS, 27.20 MiB/s [2024-12-09T05:31:29.793Z] 7194.00 IOPS, 28.10 MiB/s [2024-12-09T05:31:31.170Z] 7192.67 IOPS, 28.10 MiB/s [2024-12-09T05:31:32.107Z] 7191.75 IOPS, 28.09 MiB/s [2024-12-09T05:31:33.044Z] 7245.20 IOPS, 28.30 MiB/s [2024-12-09T05:31:33.980Z] 7474.33 IOPS, 29.20 MiB/s [2024-12-09T05:31:34.916Z] 7495.14 IOPS, 29.28 MiB/s [2024-12-09T05:31:35.853Z] 7493.00 IOPS, 29.27 MiB/s [2024-12-09T05:31:37.226Z] 7519.33 IOPS, 29.37 MiB/s [2024-12-09T05:31:37.226Z] 7559.70 IOPS, 29.53 MiB/s 00:40:50.254 Latency(us) 00:40:50.254 [2024-12-09T05:31:37.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:50.254 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:50.254 Verification LBA range: start 0x0 length 0x4000 00:40:50.254 NVMe0n1 : 10.11 7577.84 29.60 0.00 0.00 134346.95 26929.34 96754.97 00:40:50.254 [2024-12-09T05:31:37.226Z] =================================================================================================================== 00:40:50.254 [2024-12-09T05:31:37.226Z] Total : 7577.84 29.60 0.00 0.00 134346.95 26929.34 96754.97 00:40:50.254 { 00:40:50.254 "results": [ 00:40:50.254 { 00:40:50.254 "job": "NVMe0n1", 00:40:50.254 "core_mask": "0x1", 00:40:50.254 "workload": "verify", 00:40:50.254 "status": "finished", 00:40:50.254 "verify_range": { 00:40:50.254 "start": 0, 00:40:50.254 "length": 16384 00:40:50.254 }, 00:40:50.254 "queue_depth": 1024, 00:40:50.254 "io_size": 4096, 00:40:50.254 "runtime": 10.111187, 00:40:50.254 "iops": 7577.844223432917, 00:40:50.254 "mibps": 29.60095399778483, 00:40:50.254 "io_failed": 0, 00:40:50.254 "io_timeout": 0, 00:40:50.254 "avg_latency_us": 134346.94860611437, 00:40:50.254 "min_latency_us": 26929.33818181818, 00:40:50.254 "max_latency_us": 96754.96727272727 00:40:50.254 
} 00:40:50.254 ], 00:40:50.254 "core_count": 1 00:40:50.254 } 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64219 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64219 ']' 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64219 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64219 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:50.254 killing process with pid 64219 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64219' 00:40:50.254 Received shutdown signal, test time was about 10.000000 seconds 00:40:50.254 00:40:50.254 Latency(us) 00:40:50.254 [2024-12-09T05:31:37.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:50.254 [2024-12-09T05:31:37.226Z] =================================================================================================================== 00:40:50.254 [2024-12-09T05:31:37.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64219 00:40:50.254 05:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64219 00:40:50.254 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:50.254 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:50.254 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:50.254 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:50.254 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:50.254 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:50.254 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:50.254 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:50.254 rmmod nvme_tcp 00:40:50.513 rmmod nvme_fabrics 00:40:50.513 rmmod nvme_keyring 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64200 ']' 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64200 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64200 ']' 00:40:50.513 
05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64200 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64200 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:50.513 killing process with pid 64200 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64200' 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64200 00:40:50.513 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64200 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:40:50.772 05:31:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:50.772 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:40:51.071 00:40:51.071 real 0m12.635s 00:40:51.071 user 0m21.373s 00:40:51.071 sys 0m2.291s 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:51.071 ************************************ 00:40:51.071 END TEST nvmf_queue_depth 00:40:51.071 ************************************ 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:40:51.071 ************************************ 00:40:51.071 START TEST nvmf_target_multipath 00:40:51.071 ************************************ 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:40:51.071 * Looking for test storage... 
00:40:51.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:51.071 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:51.071 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:51.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.072 --rc genhtml_branch_coverage=1 00:40:51.072 --rc genhtml_function_coverage=1 00:40:51.072 --rc genhtml_legend=1 00:40:51.072 --rc geninfo_all_blocks=1 00:40:51.072 --rc geninfo_unexecuted_blocks=1 00:40:51.072 00:40:51.072 ' 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:51.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.072 --rc genhtml_branch_coverage=1 00:40:51.072 --rc genhtml_function_coverage=1 00:40:51.072 --rc genhtml_legend=1 00:40:51.072 --rc geninfo_all_blocks=1 00:40:51.072 --rc geninfo_unexecuted_blocks=1 00:40:51.072 00:40:51.072 ' 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:51.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.072 --rc genhtml_branch_coverage=1 00:40:51.072 --rc genhtml_function_coverage=1 00:40:51.072 --rc genhtml_legend=1 00:40:51.072 --rc geninfo_all_blocks=1 00:40:51.072 --rc geninfo_unexecuted_blocks=1 00:40:51.072 00:40:51.072 ' 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:51.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.072 --rc genhtml_branch_coverage=1 00:40:51.072 --rc genhtml_function_coverage=1 00:40:51.072 --rc genhtml_legend=1 00:40:51.072 --rc geninfo_all_blocks=1 00:40:51.072 --rc geninfo_unexecuted_blocks=1 00:40:51.072 00:40:51.072 ' 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.072 
05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:51.072 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:51.072 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:40:51.330 05:31:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:40:51.330 Cannot find device "nvmf_init_br" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:40:51.330 Cannot find device "nvmf_init_br2" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:40:51.330 Cannot find device "nvmf_tgt_br" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:40:51.330 Cannot find device "nvmf_tgt_br2" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:40:51.330 Cannot find device "nvmf_init_br" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:40:51.330 Cannot find device "nvmf_init_br2" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:40:51.330 Cannot find device "nvmf_tgt_br" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:40:51.330 Cannot find device "nvmf_tgt_br2" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:40:51.330 Cannot find device "nvmf_br" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:40:51.330 Cannot find device "nvmf_init_if" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:40:51.330 Cannot find device "nvmf_init_if2" 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:51.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:51.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:40:51.330 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
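For reference, the nvmf_veth_init sequence traced in the surrounding lines builds the test topology out of veth pairs, a network namespace and a bridge; condensed into a standalone sketch (namespace name, interface names, the 10.0.0.0/24 addresses and the iptables rules are copied from this log, and the bridge/iptables steps that the trace reaches a few lines further down are folded in for completeness):
# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
# two initiator-side and two target-side veth pairs
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiators get 10.0.0.1/.2, the namespaced target side gets 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bring every end up, inside and outside the namespace
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# tie the host-side peers together on a bridge and open NVMe/TCP traffic
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT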
00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:40:51.588 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:51.588 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:40:51.588 00:40:51.588 --- 10.0.0.3 ping statistics --- 00:40:51.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:51.588 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:40:51.588 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:40:51.588 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:40:51.588 00:40:51.588 --- 10.0.0.4 ping statistics --- 00:40:51.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:51.588 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:51.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:51.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:40:51.588 00:40:51.588 --- 10.0.0.1 ping statistics --- 00:40:51.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:51.588 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:40:51.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:51.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:40:51.588 00:40:51.588 --- 10.0.0.2 ping statistics --- 00:40:51.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:51.588 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64596 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64596 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64596 ']' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:51.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
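nvmfappstart then launches the target inside that namespace and waits for its RPC socket; a minimal sketch of the same startup (binary path, core mask and /var/tmp/spdk.sock copied from this log; polling for the socket is only an approximation of the harness's waitforlisten helper, and the nvmf_create_transport call is the one traced a few lines below):
# start the NVMe-oF target in the namespace built above
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# rough stand-in for waitforlisten: block until the RPC UNIX socket appears
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
# once it answers, the TCP transport can be created over that socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192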
00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:51.588 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:51.588 [2024-12-09 05:31:38.538984] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:40:51.588 [2024-12-09 05:31:38.539092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:51.846 [2024-12-09 05:31:38.697525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:51.846 [2024-12-09 05:31:38.741539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:51.846 [2024-12-09 05:31:38.741631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:51.846 [2024-12-09 05:31:38.741657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:51.846 [2024-12-09 05:31:38.741669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:51.846 [2024-12-09 05:31:38.741677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:51.846 [2024-12-09 05:31:38.742656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:51.846 [2024-12-09 05:31:38.742821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:51.846 [2024-12-09 05:31:38.742930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:51.846 [2024-12-09 05:31:38.742936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:51.846 [2024-12-09 05:31:38.777624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:52.103 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:52.103 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:40:52.103 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:52.103 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:52.103 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:52.103 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:52.103 05:31:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:52.361 [2024-12-09 05:31:39.194008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:52.361 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:40:52.619 Malloc0 00:40:52.619 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:40:52.877 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:53.445 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:40:53.445 [2024-12-09 05:31:40.413643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:53.703 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:40:53.961 [2024-12-09 05:31:40.677967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:40:53.961 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:40:53.961 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:40:54.221 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:40:54.221 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:40:54.221 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:54.221 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:54.221 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64685 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:40:56.127 05:31:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:40:56.127 [global] 00:40:56.127 thread=1 00:40:56.127 invalidate=1 00:40:56.127 rw=randrw 00:40:56.127 time_based=1 00:40:56.127 runtime=6 00:40:56.127 ioengine=libaio 00:40:56.127 direct=1 00:40:56.127 bs=4096 00:40:56.127 iodepth=128 00:40:56.127 norandommap=0 00:40:56.127 numjobs=1 00:40:56.127 00:40:56.127 verify_dump=1 00:40:56.127 verify_backlog=512 00:40:56.127 verify_state_save=0 00:40:56.127 do_verify=1 00:40:56.127 verify=crc32c-intel 00:40:56.127 [job0] 00:40:56.127 filename=/dev/nvme0n1 00:40:56.127 Could not set queue depth (nvme0n1) 00:40:56.386 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:56.386 fio-3.35 00:40:56.386 Starting 1 thread 00:40:57.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:40:57.583 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:57.842 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:58.101 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:58.359 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64685 00:41:02.538 00:41:02.538 job0: (groupid=0, jobs=1): err= 0: pid=64706: Mon Dec 9 05:31:49 2024 00:41:02.538 read: IOPS=9905, BW=38.7MiB/s (40.6MB/s)(232MiB/6007msec) 00:41:02.538 slat (usec): min=5, max=6349, avg=59.90, stdev=234.00 00:41:02.538 clat (usec): min=1294, max=15456, avg=8756.22, stdev=1541.71 00:41:02.538 lat (usec): min=1362, max=15467, avg=8816.12, stdev=1545.46 00:41:02.538 clat percentiles (usec): 00:41:02.538 | 1.00th=[ 4490], 5.00th=[ 6587], 10.00th=[ 7504], 20.00th=[ 8029], 00:41:02.538 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:41:02.538 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[12518], 00:41:02.538 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14615], 99.95th=[14615], 00:41:02.538 | 99.99th=[15139] 00:41:02.538 bw ( KiB/s): min= 4760, max=26640, per=51.87%, avg=20552.73, stdev=7165.06, samples=11 00:41:02.538 iops : min= 1190, max= 6660, avg=5138.18, stdev=1791.27, samples=11 00:41:02.538 write: IOPS=5948, BW=23.2MiB/s (24.4MB/s)(124MiB/5335msec); 0 zone resets 00:41:02.538 slat (usec): min=13, max=2489, avg=66.62, stdev=168.91 00:41:02.538 clat (usec): min=1182, max=14869, avg=7591.64, stdev=1349.97 00:41:02.538 lat (usec): min=1207, max=14894, avg=7658.26, stdev=1353.63 00:41:02.538 clat percentiles (usec): 00:41:02.538 | 1.00th=[ 3359], 5.00th=[ 4490], 10.00th=[ 5932], 20.00th=[ 7111], 00:41:02.538 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7963], 00:41:02.538 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 8979], 00:41:02.538 | 99.00th=[11863], 99.50th=[12387], 99.90th=[13566], 99.95th=[13960], 00:41:02.538 | 99.99th=[14484] 00:41:02.538 bw ( KiB/s): min= 5016, max=26432, per=86.75%, avg=20641.36, stdev=7019.47, samples=11 00:41:02.538 iops : min= 1254, max= 6608, avg=5160.27, stdev=1754.87, samples=11 00:41:02.538 lat (msec) : 2=0.03%, 4=1.29%, 10=91.77%, 20=6.91% 00:41:02.538 cpu : usr=5.58%, sys=20.26%, ctx=5401, majf=0, minf=102 00:41:02.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:41:02.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:02.538 issued rwts: total=59503,31734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.538 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:02.538 00:41:02.538 Run status group 0 (all jobs): 00:41:02.538 READ: bw=38.7MiB/s (40.6MB/s), 38.7MiB/s-38.7MiB/s (40.6MB/s-40.6MB/s), io=232MiB (244MB), run=6007-6007msec 00:41:02.538 WRITE: bw=23.2MiB/s (24.4MB/s), 23.2MiB/s-23.2MiB/s (24.4MB/s-24.4MB/s), io=124MiB (130MB), run=5335-5335msec 00:41:02.538 00:41:02.538 Disk stats (read/write): 00:41:02.538 nvme0n1: ios=58627/31167, merge=0/0, ticks=493606/222855, in_queue=716461, util=98.58% 00:41:02.538 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:41:02.796 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64783 00:41:03.054 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:41:03.054 [global] 00:41:03.054 thread=1 00:41:03.054 invalidate=1 00:41:03.054 rw=randrw 00:41:03.054 time_based=1 00:41:03.054 runtime=6 00:41:03.054 ioengine=libaio 00:41:03.054 direct=1 00:41:03.054 bs=4096 00:41:03.054 iodepth=128 00:41:03.054 norandommap=0 00:41:03.054 numjobs=1 00:41:03.054 00:41:03.054 verify_dump=1 00:41:03.054 verify_backlog=512 00:41:03.054 verify_state_save=0 00:41:03.054 do_verify=1 00:41:03.054 verify=crc32c-intel 00:41:03.054 [job0] 00:41:03.054 filename=/dev/nvme0n1 00:41:03.054 Could not set queue depth (nvme0n1) 00:41:03.313 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:03.313 fio-3.35 00:41:03.313 Starting 1 thread 00:41:04.250 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:41:04.509 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:41:04.769 
05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:41:04.769 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:41:05.028 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:41:05.287 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64783 00:41:09.477 00:41:09.477 job0: (groupid=0, jobs=1): err= 0: pid=64810: Mon Dec 9 05:31:56 2024 00:41:09.477 read: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(259MiB/6006msec) 00:41:09.477 slat (usec): min=3, max=8568, avg=43.61, stdev=193.03 00:41:09.477 clat (usec): min=298, max=16886, avg=7881.17, stdev=2131.47 00:41:09.477 lat (usec): min=319, max=16921, avg=7924.78, stdev=2147.26 00:41:09.477 clat percentiles (usec): 00:41:09.477 | 1.00th=[ 2638], 5.00th=[ 3949], 10.00th=[ 4817], 20.00th=[ 6128], 00:41:09.477 | 30.00th=[ 7308], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8586], 00:41:09.477 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[11863], 00:41:09.477 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14615], 99.95th=[15008], 00:41:09.477 | 99.99th=[15664] 00:41:09.477 bw ( KiB/s): min= 9560, max=37904, per=53.39%, avg=23590.55, stdev=8177.37, samples=11 00:41:09.477 iops : min= 2390, max= 9476, avg=5897.64, stdev=2044.34, samples=11 00:41:09.477 write: IOPS=6531, BW=25.5MiB/s (26.8MB/s)(138MiB/5411msec); 0 zone resets 00:41:09.477 slat (usec): min=12, max=2621, avg=56.81, stdev=143.33 00:41:09.477 clat (usec): min=382, max=14789, avg=6782.33, stdev=1861.52 00:41:09.477 lat (usec): min=406, max=14855, avg=6839.14, stdev=1877.10 00:41:09.477 clat percentiles (usec): 00:41:09.477 | 1.00th=[ 2606], 5.00th=[ 3458], 10.00th=[ 3949], 20.00th=[ 4752], 00:41:09.477 | 30.00th=[ 5735], 40.00th=[ 7046], 50.00th=[ 7439], 60.00th=[ 7701], 00:41:09.477 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 8586], 95.00th=[ 8848], 00:41:09.478 | 99.00th=[11076], 99.50th=[11994], 99.90th=[13304], 99.95th=[13960], 00:41:09.478 | 99.99th=[14746] 00:41:09.478 bw ( KiB/s): min= 9840, max=37056, per=90.38%, avg=23615.91, stdev=7993.90, samples=11 00:41:09.478 iops : min= 2460, max= 9264, avg=5903.91, stdev=1998.40, samples=11 00:41:09.478 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.05% 00:41:09.478 lat (msec) : 2=0.29%, 4=6.66%, 10=87.41%, 20=5.55% 00:41:09.478 cpu : usr=5.80%, sys=23.00%, ctx=5754, majf=0, minf=102 00:41:09.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:41:09.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:09.478 issued rwts: total=66343,35344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.478 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:41:09.478 00:41:09.478 Run status group 0 (all jobs): 00:41:09.478 READ: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=259MiB (272MB), run=6006-6006msec 00:41:09.478 WRITE: bw=25.5MiB/s (26.8MB/s), 25.5MiB/s-25.5MiB/s (26.8MB/s-26.8MB/s), io=138MiB (145MB), run=5411-5411msec 00:41:09.478 00:41:09.478 Disk stats (read/write): 00:41:09.478 nvme0n1: ios=65498/34778, merge=0/0, ticks=494020/220739, in_queue=714759, util=98.58% 00:41:09.478 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:09.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:09.478 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:09.478 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:41:09.478 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:09.478 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:09.478 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:09.478 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:09.478 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:41:09.478 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:09.735 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:41:09.735 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:41:09.735 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:41:09.735 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:41:09.735 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:09.735 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:09.735 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:09.735 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:09.735 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:09.735 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:09.992 rmmod nvme_tcp 00:41:09.992 rmmod nvme_fabrics 00:41:09.992 rmmod nvme_keyring 00:41:09.992 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:09.992 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:09.992 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:09.992 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64596 ']' 00:41:09.992 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64596 00:41:09.992 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64596 ']' 00:41:09.992 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64596 00:41:09.992 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:41:09.992 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:09.992 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64596 00:41:09.993 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:09.993 killing process with pid 64596 00:41:09.993 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:09.993 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64596' 00:41:09.993 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64596 00:41:09.993 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64596 00:41:10.250 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:10.250 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:10.250 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:10.250 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:10.250 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:10.250 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:10.250 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:10.250 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:10.250 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:10.250 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:10.250 05:31:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:10.250 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:10.508 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:41:10.508 00:41:10.508 real 0m19.410s 00:41:10.508 user 1m11.551s 00:41:10.508 sys 0m10.063s 00:41:10.508 ************************************ 00:41:10.508 END TEST nvmf_target_multipath 00:41:10.508 ************************************ 00:41:10.508 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:10.508 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:41:10.509 ************************************ 00:41:10.509 START TEST nvmf_zcopy 00:41:10.509 ************************************ 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:41:10.509 * Looking for test storage... 
00:41:10.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:10.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.509 --rc genhtml_branch_coverage=1 00:41:10.509 --rc genhtml_function_coverage=1 00:41:10.509 --rc genhtml_legend=1 00:41:10.509 --rc geninfo_all_blocks=1 00:41:10.509 --rc geninfo_unexecuted_blocks=1 00:41:10.509 00:41:10.509 ' 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:10.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.509 --rc genhtml_branch_coverage=1 00:41:10.509 --rc genhtml_function_coverage=1 00:41:10.509 --rc genhtml_legend=1 00:41:10.509 --rc geninfo_all_blocks=1 00:41:10.509 --rc geninfo_unexecuted_blocks=1 00:41:10.509 00:41:10.509 ' 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:10.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.509 --rc genhtml_branch_coverage=1 00:41:10.509 --rc genhtml_function_coverage=1 00:41:10.509 --rc genhtml_legend=1 00:41:10.509 --rc geninfo_all_blocks=1 00:41:10.509 --rc geninfo_unexecuted_blocks=1 00:41:10.509 00:41:10.509 ' 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:10.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.509 --rc genhtml_branch_coverage=1 00:41:10.509 --rc genhtml_function_coverage=1 00:41:10.509 --rc genhtml_legend=1 00:41:10.509 --rc geninfo_all_blocks=1 00:41:10.509 --rc geninfo_unexecuted_blocks=1 00:41:10.509 00:41:10.509 ' 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
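The xtrace just above is scripts/common.sh deciding whether the installed lcov predates 2.x ("lt 1.15 2" via cmp_versions): both version strings are split on IFS=.-: and compared component by component. A minimal sketch of that comparison, using a hypothetical name version_lt in place of the script's own cmp_versions/lt helpers:

    version_lt() {                     # hypothetical helper: returns 0 when $1 < $2
        local IFS='.-:'                # split on ., - and :, as the xtrace shows
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller: older version
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly larger: newer version
        done
        return 1                       # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* options"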
00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:10.509 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:10.510 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
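nvmftestinit here hands off to nvmf_veth_init, which the next lines run in full (the "Cannot find device" messages are the tolerated cleanup of any topology left over from a previous run). Condensed to a single initiator/target pair, and keeping the interface names and addresses used in this log, the setup those lines perform is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                      # bridge the two peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The captured run does the same for nvmf_init_if2/nvmf_tgt_if2 (10.0.0.2 and 10.0.0.4), brings every link up, and then ping-checks all four addresses before starting the target.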
00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:10.510 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:10.767 Cannot find device "nvmf_init_br" 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:41:10.767 05:31:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:10.767 Cannot find device "nvmf_init_br2" 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:10.767 Cannot find device "nvmf_tgt_br" 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:10.767 Cannot find device "nvmf_tgt_br2" 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:10.767 Cannot find device "nvmf_init_br" 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:10.767 Cannot find device "nvmf_init_br2" 00:41:10.767 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:10.768 Cannot find device "nvmf_tgt_br" 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:10.768 Cannot find device "nvmf_tgt_br2" 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:10.768 Cannot find device "nvmf_br" 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:10.768 Cannot find device "nvmf_init_if" 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:10.768 Cannot find device "nvmf_init_if2" 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:10.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:10.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:10.768 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:11.026 05:31:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:11.026 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:11.026 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:41:11.026 00:41:11.026 --- 10.0.0.3 ping statistics --- 00:41:11.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:11.026 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:11.026 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:11.026 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:41:11.026 00:41:11.026 --- 10.0.0.4 ping statistics --- 00:41:11.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:11.026 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:11.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:11.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:41:11.026 00:41:11.026 --- 10.0.0.1 ping statistics --- 00:41:11.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:11.026 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:11.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:11.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:41:11.026 00:41:11.026 --- 10.0.0.2 ping statistics --- 00:41:11.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:11.026 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65113 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65113 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65113 ']' 00:41:11.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:11.026 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:11.026 [2024-12-09 05:31:57.940552] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:41:11.026 [2024-12-09 05:31:57.940896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:11.284 [2024-12-09 05:31:58.110546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:11.284 [2024-12-09 05:31:58.153535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:11.284 [2024-12-09 05:31:58.153651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:11.284 [2024-12-09 05:31:58.153668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:11.284 [2024-12-09 05:31:58.153679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:11.284 [2024-12-09 05:31:58.153689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:11.284 [2024-12-09 05:31:58.154247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:11.284 [2024-12-09 05:31:58.184476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:11.284 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:11.284 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:41:11.284 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:11.284 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:11.284 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:11.541 [2024-12-09 05:31:58.287009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:41:11.541 [2024-12-09 05:31:58.303075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:11.541 malloc0 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:11.541 { 00:41:11.541 "params": { 00:41:11.541 "name": "Nvme$subsystem", 00:41:11.541 "trtype": "$TEST_TRANSPORT", 00:41:11.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:11.541 "adrfam": "ipv4", 00:41:11.541 "trsvcid": "$NVMF_PORT", 00:41:11.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:11.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:11.541 "hdgst": ${hdgst:-false}, 00:41:11.541 "ddgst": ${ddgst:-false} 00:41:11.541 }, 00:41:11.541 "method": "bdev_nvme_attach_controller" 00:41:11.541 } 00:41:11.541 EOF 00:41:11.541 )") 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
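gen_nvmf_target_json above assembles the single bdev_nvme_attach_controller entry that bdevperf consumes; the --json /dev/fd/62 argument seen in the bdevperf command line is the anonymous file descriptor a shell process substitution of that function would provide. A sketch of the invocation pattern, reusing the paths and options from this run:

    # the generated JSON is streamed to bdevperf over an anonymous fd,
    # so no config file is written to disk
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) \
        -t 10 -q 128 -w verify -o 8192    # 10 s verify workload, queue depth 128, 8 KiB I/O

The attached controller surfaces as Nvme1n1, which is the bdev named in the job output that follows.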
00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:11.541 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:11.541 "params": { 00:41:11.541 "name": "Nvme1", 00:41:11.541 "trtype": "tcp", 00:41:11.541 "traddr": "10.0.0.3", 00:41:11.541 "adrfam": "ipv4", 00:41:11.541 "trsvcid": "4420", 00:41:11.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:11.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:11.541 "hdgst": false, 00:41:11.541 "ddgst": false 00:41:11.541 }, 00:41:11.541 "method": "bdev_nvme_attach_controller" 00:41:11.541 }' 00:41:11.541 [2024-12-09 05:31:58.394361] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:41:11.541 [2024-12-09 05:31:58.394468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65139 ] 00:41:11.798 [2024-12-09 05:31:58.551387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:11.798 [2024-12-09 05:31:58.590337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:11.798 [2024-12-09 05:31:58.631241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:11.798 Running I/O for 10 seconds... 00:41:14.105 6096.00 IOPS, 47.62 MiB/s [2024-12-09T05:32:02.034Z] 6172.50 IOPS, 48.22 MiB/s [2024-12-09T05:32:02.970Z] 6221.33 IOPS, 48.60 MiB/s [2024-12-09T05:32:03.906Z] 6333.50 IOPS, 49.48 MiB/s [2024-12-09T05:32:04.860Z] 6381.00 IOPS, 49.85 MiB/s [2024-12-09T05:32:05.793Z] 6389.17 IOPS, 49.92 MiB/s [2024-12-09T05:32:07.165Z] 6368.86 IOPS, 49.76 MiB/s [2024-12-09T05:32:08.097Z] 6355.88 IOPS, 49.66 MiB/s [2024-12-09T05:32:09.042Z] 6332.11 IOPS, 49.47 MiB/s [2024-12-09T05:32:09.042Z] 6307.70 IOPS, 49.28 MiB/s 00:41:22.070 Latency(us) 00:41:22.070 [2024-12-09T05:32:09.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:22.070 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:22.070 Verification LBA range: start 0x0 length 0x1000 00:41:22.070 Nvme1n1 : 10.01 6310.33 49.30 0.00 0.00 20219.56 2159.71 33602.09 00:41:22.070 [2024-12-09T05:32:09.042Z] =================================================================================================================== 00:41:22.070 [2024-12-09T05:32:09.042Z] Total : 6310.33 49.30 0.00 0.00 20219.56 2159.71 33602.09 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65256 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:22.070 { 00:41:22.070 "params": { 00:41:22.070 "name": "Nvme$subsystem", 00:41:22.070 "trtype": "$TEST_TRANSPORT", 00:41:22.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:22.070 "adrfam": "ipv4", 00:41:22.070 "trsvcid": "$NVMF_PORT", 00:41:22.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:22.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:22.070 "hdgst": ${hdgst:-false}, 00:41:22.070 "ddgst": ${ddgst:-false} 00:41:22.070 }, 00:41:22.070 "method": "bdev_nvme_attach_controller" 00:41:22.070 } 00:41:22.070 EOF 00:41:22.070 )") 00:41:22.070 [2024-12-09 05:32:08.935038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.070 [2024-12-09 05:32:08.935077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:22.070 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:22.070 "params": { 00:41:22.070 "name": "Nvme1", 00:41:22.070 "trtype": "tcp", 00:41:22.070 "traddr": "10.0.0.3", 00:41:22.070 "adrfam": "ipv4", 00:41:22.070 "trsvcid": "4420", 00:41:22.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:22.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:22.070 "hdgst": false, 00:41:22.070 "ddgst": false 00:41:22.070 }, 00:41:22.070 "method": "bdev_nvme_attach_controller" 00:41:22.070 }' 00:41:22.070 [2024-12-09 05:32:08.947034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.071 [2024-12-09 05:32:08.947241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.071 [2024-12-09 05:32:08.959050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.071 [2024-12-09 05:32:08.959078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.071 [2024-12-09 05:32:08.971034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.071 [2024-12-09 05:32:08.971060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.071 [2024-12-09 05:32:08.983028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.071 [2024-12-09 05:32:08.983054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.071 [2024-12-09 05:32:08.989677] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:41:22.071 [2024-12-09 05:32:08.989780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65256 ] 00:41:22.071 [2024-12-09 05:32:08.995046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.071 [2024-12-09 05:32:08.995070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.071 [2024-12-09 05:32:09.007045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.071 [2024-12-09 05:32:09.007070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.071 [2024-12-09 05:32:09.019051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.071 [2024-12-09 05:32:09.019076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.071 [2024-12-09 05:32:09.031060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.071 [2024-12-09 05:32:09.031084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.043077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.043264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.055074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.055100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.067073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.067099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.079078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.079106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.091082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.091108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.103081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.103106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.119088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.119114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.131089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.131114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.139092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.139117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.141669] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:41:22.330 [2024-12-09 05:32:09.147107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.147135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.159119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.159151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.167113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.167143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.174697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.330 [2024-12-09 05:32:09.175115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.175149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.183119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.183146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.191144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.191184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.203155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.203196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.213500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:22.330 [2024-12-09 05:32:09.215156] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.215204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.227178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.227229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.239146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.239189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.251163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.251209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.263172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.263208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.275226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.330 [2024-12-09 05:32:09.275299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.330 [2024-12-09 05:32:09.287271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:22.330 [2024-12-09 05:32:09.287322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.299210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.299268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.311223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.311282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.319199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.319228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 Running I/O for 5 seconds... 00:41:22.589 [2024-12-09 05:32:09.336603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.336653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.352961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.352995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.371136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.371169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.384832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.384866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.400861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.400922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.417141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.417202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.433239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.433316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.449732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.449792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.467073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.467123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.477839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.477906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.493564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.493848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:41:22.589 [2024-12-09 05:32:09.509391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.509600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.589 [2024-12-09 05:32:09.518929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.589 [2024-12-09 05:32:09.518979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.590 [2024-12-09 05:32:09.531125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.590 [2024-12-09 05:32:09.531157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.590 [2024-12-09 05:32:09.547167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.590 [2024-12-09 05:32:09.547199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.848 [2024-12-09 05:32:09.563018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.848 [2024-12-09 05:32:09.563051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.848 [2024-12-09 05:32:09.572181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.848 [2024-12-09 05:32:09.572346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.848 [2024-12-09 05:32:09.588185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.848 [2024-12-09 05:32:09.588220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.848 [2024-12-09 05:32:09.604719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.848 [2024-12-09 05:32:09.604752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.623116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.623148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.638291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.638481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.647340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.647515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.663511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.663697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.672375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.672548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.688032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.688206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.703558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 
[2024-12-09 05:32:09.703768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.720701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.720881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.737203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.737433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.753596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.753792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.764178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.764402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.779588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.779801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.796450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.796663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.806541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.806754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.849 [2024-12-09 05:32:09.818420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.849 [2024-12-09 05:32:09.818627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.830167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.830399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.846093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.846240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.863482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.863710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.874052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.874199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.888813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.889013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.905004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.905038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.913835] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.913869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.928914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.928963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.944498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.944535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.961540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.961591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.971169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.971354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:09.986464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:09.986629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:10.002828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:10.002999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:10.012576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:10.012736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:10.024370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:10.024523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:10.035771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:10.035961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:10.048967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:10.049154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.108 [2024-12-09 05:32:10.065475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.108 [2024-12-09 05:32:10.065652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.081628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.081901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.097510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.097689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.113994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.114150] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.129518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.129709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.138543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.138730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.151664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.151831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.169160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.169339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.183813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.184134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.194289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.194480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.206640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.206886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.222488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.222854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.239109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.239162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.248821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.248870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.263432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.263498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.281374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.281418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.297459] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.297514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.307492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.307557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.367 [2024-12-09 05:32:10.323098] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.367 [2024-12-09 05:32:10.323159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 11682.00 IOPS, 91.27 MiB/s [2024-12-09T05:32:10.598Z] [2024-12-09 05:32:10.340643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.340738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.356162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.356195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.364967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.365000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.378000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.378165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.388220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.388268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.403440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.403473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.419802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.419835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.429417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.429617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.443970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.444003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.453328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.453509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.468197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.468376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.478077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.478308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.492739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.492899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.503512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:23.626 [2024-12-09 05:32:10.503608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.515805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.515965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.531335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.531519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.547431] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.547625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.556734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.556898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.569117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.569305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.579146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.579330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-12-09 05:32:10.594538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-12-09 05:32:10.594776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-12-09 05:32:10.610763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-12-09 05:32:10.610924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-12-09 05:32:10.621127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-12-09 05:32:10.621309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-12-09 05:32:10.633238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-12-09 05:32:10.633421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-12-09 05:32:10.644891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-12-09 05:32:10.645035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-12-09 05:32:10.660486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-12-09 05:32:10.660686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-12-09 05:32:10.676885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-12-09 05:32:10.677098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-12-09 05:32:10.693432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-12-09 05:32:10.693759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-12-09 05:32:10.711453] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.711776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.886 [2024-12-09 05:32:10.726567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.726903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.886 [2024-12-09 05:32:10.743358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.743733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.886 [2024-12-09 05:32:10.759624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.759685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.886 [2024-12-09 05:32:10.769468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.769513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.886 [2024-12-09 05:32:10.780520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.780555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.886 [2024-12-09 05:32:10.792168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.792225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.886 [2024-12-09 05:32:10.808511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.808582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.886 [2024-12-09 05:32:10.824797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.824860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.886 [2024-12-09 05:32:10.833787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.833838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.886 [2024-12-09 05:32:10.849152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.886 [2024-12-09 05:32:10.849457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:10.859221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:10.859287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:10.875041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:10.875074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:10.892718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:10.892768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:10.902667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:10.902730] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:10.913325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:10.913357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:10.924996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:10.925044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:10.940766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:10.940799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:10.959028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:10.959062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:10.974544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:10.974654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:10.990515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:10.990548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:11.000918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:11.000954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:11.015968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:11.016142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:11.026850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:11.027027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:11.042812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:11.042846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:11.058445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:11.058482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:11.069303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:11.069502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:11.082206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:11.082260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:11.093169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:11.093220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.145 [2024-12-09 05:32:11.106744] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.145 [2024-12-09 05:32:11.106778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.122959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.122995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.132644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.132677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.146663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.146711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.156352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.156528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.171260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.171425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.187172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.187358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.197007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.197152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.213552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.213745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.229327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.229494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.239543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.239759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.251439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.251633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.261842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.262027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.276240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.276422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.286257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.286468] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.300812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.300986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.317613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.317792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 11663.00 IOPS, 91.12 MiB/s [2024-12-09T05:32:11.377Z] [2024-12-09 05:32:11.327681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.327863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.341700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.341871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.357223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.357425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.405 [2024-12-09 05:32:11.366639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.405 [2024-12-09 05:32:11.366824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.379117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.379155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.390995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.391047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.406439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.406473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.422979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.423012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.440171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.440221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.457159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.457405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.468149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.468320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.482408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.482709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 
05:32:11.498141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.498491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.507545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.507793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.523944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.524235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.534626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.534794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.549248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.549469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.566917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.567198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.682 [2024-12-09 05:32:11.576643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.682 [2024-12-09 05:32:11.576834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.683 [2024-12-09 05:32:11.591238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.683 [2024-12-09 05:32:11.591524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.683 [2024-12-09 05:32:11.607881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.683 [2024-12-09 05:32:11.608067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.683 [2024-12-09 05:32:11.624994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.683 [2024-12-09 05:32:11.625160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.683 [2024-12-09 05:32:11.635447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.683 [2024-12-09 05:32:11.635624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.650850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.650997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.668083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.668244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.678061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.678208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.691886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.691967] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.702398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.702434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.717929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.717967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.733725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.733759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.743561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.743652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.759293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.759328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.776300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.776335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.786608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.786668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.801081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.801255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.819067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.819102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.833384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.833417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.842777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.842810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.857532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.857595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.873371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.873407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.882738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.882773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.895470] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.895504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.905753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.905786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.919794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.919828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-12-09 05:32:11.936229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-12-09 05:32:11.936263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:11.953040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:11.953090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:11.969944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:11.969982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:11.984944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:11.984993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:11.994822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:11.994857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:12.007290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:12.007324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:12.018164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:12.018200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:12.035538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:12.035619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:12.051874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:12.051931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:12.061620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:12.061679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:12.076135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:12.076197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.233 [2024-12-09 05:32:12.091891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.233 [2024-12-09 05:32:12.091972] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.234 [2024-12-09 05:32:12.109052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.234 [2024-12-09 05:32:12.109335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.234 [2024-12-09 05:32:12.125481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.234 [2024-12-09 05:32:12.125531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.234 [2024-12-09 05:32:12.142719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.234 [2024-12-09 05:32:12.142778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.234 [2024-12-09 05:32:12.159113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.234 [2024-12-09 05:32:12.159176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.234 [2024-12-09 05:32:12.168582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.234 [2024-12-09 05:32:12.168888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.234 [2024-12-09 05:32:12.183129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.234 [2024-12-09 05:32:12.183426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.234 [2024-12-09 05:32:12.193620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.234 [2024-12-09 05:32:12.193954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.492 [2024-12-09 05:32:12.209057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.492 [2024-12-09 05:32:12.209367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.492 [2024-12-09 05:32:12.225057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.492 [2024-12-09 05:32:12.225239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.492 [2024-12-09 05:32:12.235404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.235691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.250677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.250899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.267038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.267197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.277031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.277195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.288151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.288330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.299632] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.299813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.315616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.315841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 11641.33 IOPS, 90.95 MiB/s [2024-12-09T05:32:12.465Z] [2024-12-09 05:32:12.326066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.326214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.341194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.341422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.357530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.357725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.367515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.367686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.383056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.383203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.399050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.399197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.409430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.409618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.425136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.425356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.440990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.441134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.493 [2024-12-09 05:32:12.457611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.493 [2024-12-09 05:32:12.457821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.468311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.752 [2024-12-09 05:32:12.468497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.480609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.752 [2024-12-09 05:32:12.480806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.491041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:25.752 [2024-12-09 05:32:12.491202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.505170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.752 [2024-12-09 05:32:12.505350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.514583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.752 [2024-12-09 05:32:12.514774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.528783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.752 [2024-12-09 05:32:12.528929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.539294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.752 [2024-12-09 05:32:12.539447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.554861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.752 [2024-12-09 05:32:12.554899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.570676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.752 [2024-12-09 05:32:12.570712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.581187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.752 [2024-12-09 05:32:12.581235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.752 [2024-12-09 05:32:12.596616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.752 [2024-12-09 05:32:12.596785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.753 [2024-12-09 05:32:12.607247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.753 [2024-12-09 05:32:12.607283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.753 [2024-12-09 05:32:12.621408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.753 [2024-12-09 05:32:12.621443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.753 [2024-12-09 05:32:12.631384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.753 [2024-12-09 05:32:12.631417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.753 [2024-12-09 05:32:12.646855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.753 [2024-12-09 05:32:12.646905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.753 [2024-12-09 05:32:12.664589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.753 [2024-12-09 05:32:12.664787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.753 [2024-12-09 05:32:12.674891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.753 [2024-12-09 05:32:12.674938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.753 [2024-12-09 05:32:12.685329] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.753 [2024-12-09 05:32:12.685361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.753 [2024-12-09 05:32:12.695363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.753 [2024-12-09 05:32:12.695396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.753 [2024-12-09 05:32:12.709879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.753 [2024-12-09 05:32:12.709968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.728687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.728721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.739239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.739272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.751784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.751819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.769238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.769272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.784680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.784715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.793879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.793956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.810484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.810525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.827440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.827485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.837154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.837202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.851206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.851262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.860877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.860935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.875433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.875481] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.884437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.884728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.901104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.901155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.919243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.919298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.935455] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.935514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.945100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.945143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.955411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.955445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.966048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.966083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.012 [2024-12-09 05:32:12.976334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.012 [2024-12-09 05:32:12.976524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:12.988320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:12.988355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:12.999269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:12.999452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.012214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.012248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.030883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.030918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.044757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.044800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.054046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.054222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.069452] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.069667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.084273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.084453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.093483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.093693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.107447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.107656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.117529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.117755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.133098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.133304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.149311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.149476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.159733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.159916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.174008] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.174154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.185148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.185456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.200204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.200525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.210058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.210410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.220990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.221292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.272 [2024-12-09 05:32:13.233396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.272 [2024-12-09 05:32:13.233782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.531 [2024-12-09 05:32:13.249211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.531 [2024-12-09 05:32:13.249520] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.531 [2024-12-09 05:32:13.267184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.531 [2024-12-09 05:32:13.267497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.531 [2024-12-09 05:32:13.281785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.531 [2024-12-09 05:32:13.282084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.297527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.297926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.307119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.307419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.320409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.320706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 11628.75 IOPS, 90.85 MiB/s [2024-12-09T05:32:13.504Z] [2024-12-09 05:32:13.331097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.331396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.345250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.345548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.361682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.361980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.378102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.378479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.394387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.394420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.412116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.412310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.421990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.422026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.437576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.437685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.453328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.453367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 
05:32:13.462924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.463079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.478705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.478755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.532 [2024-12-09 05:32:13.489440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.532 [2024-12-09 05:32:13.489474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.505284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.505321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.520599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.520651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.536294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.536329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.546730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.546765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.561766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.561800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.578704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.578741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.595271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.595306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.612072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.612107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.622355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.622389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.634129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.634181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.644476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.644511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.658885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.658919] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.668625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.668685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.683184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.683267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.693967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.694005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.709112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.709147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.726071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.726108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.744551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.744783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.791 [2024-12-09 05:32:13.754980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.791 [2024-12-09 05:32:13.755156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.770693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.770873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.780167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.780378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.794601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.794814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.803770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.803950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.817536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.817749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.827984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.828165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.842172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.842318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.859239] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.859423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.868729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.868895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.883337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.883592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.902257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.902453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.916138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.916312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.933027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.933204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.947945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.948127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.957879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.958052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.972847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.050 [2024-12-09 05:32:13.973000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.050 [2024-12-09 05:32:13.983387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.051 [2024-12-09 05:32:13.983556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.051 [2024-12-09 05:32:13.997791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.051 [2024-12-09 05:32:13.997971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.051 [2024-12-09 05:32:14.007852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.051 [2024-12-09 05:32:14.008053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.023871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.024065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.040132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.040167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.057071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.057106] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.074783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.074817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.089466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.089503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.098517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.098723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.114516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.114717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.125027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.125063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.139819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.139856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.150386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.150535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.165559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.165775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.181983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.182020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.198945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.198981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.214194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.214242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.231509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.231543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.246352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.246387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.255787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.255822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.309 [2024-12-09 05:32:14.272298] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.309 [2024-12-09 05:32:14.272337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.289520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.289700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.304599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.304636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.314096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.314134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 11631.40 IOPS, 90.87 MiB/s [2024-12-09T05:32:14.540Z] [2024-12-09 05:32:14.328150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.328316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.334131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.334167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 00:41:27.568 Latency(us) 00:41:27.568 [2024-12-09T05:32:14.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:27.568 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:41:27.568 Nvme1n1 : 5.01 11631.89 90.87 0.00 0.00 10989.93 4408.79 19779.96 00:41:27.568 [2024-12-09T05:32:14.540Z] =================================================================================================================== 00:41:27.568 [2024-12-09T05:32:14.540Z] Total : 11631.89 90.87 0.00 0.00 10989.93 4408.79 19779.96 00:41:27.568 [2024-12-09 05:32:14.346148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.346185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.354150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.354182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.366180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.366244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.378195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.378548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.390210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.390280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.402194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.402277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 
05:32:14.414186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.414256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.422164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.422329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.430173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.430203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.438181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.438215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.450190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.568 [2024-12-09 05:32:14.450243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.568 [2024-12-09 05:32:14.458173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.569 [2024-12-09 05:32:14.458330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.569 [2024-12-09 05:32:14.466180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.569 [2024-12-09 05:32:14.466211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.569 [2024-12-09 05:32:14.474179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.569 [2024-12-09 05:32:14.474209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.569 [2024-12-09 05:32:14.482182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.569 [2024-12-09 05:32:14.482213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.569 [2024-12-09 05:32:14.494191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.569 [2024-12-09 05:32:14.494222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.569 [2024-12-09 05:32:14.502183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.569 [2024-12-09 05:32:14.502210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.569 [2024-12-09 05:32:14.510184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.569 [2024-12-09 05:32:14.510212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.569 [2024-12-09 05:32:14.518186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.569 [2024-12-09 05:32:14.518241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.569 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65256) - No such process 00:41:27.569 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65256 00:41:27.569 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:27.569 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.569 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.569 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.569 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:27.569 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.569 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.827 delay0 00:41:27.827 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.827 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:27.827 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.827 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.827 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.827 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:41:27.827 [2024-12-09 05:32:14.724859] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:34.388 Initializing NVMe Controllers 00:41:34.388 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:41:34.388 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:34.388 Initialization complete. Launching workers. 
00:41:34.388 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 64 00:41:34.388 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 351, failed to submit 33 00:41:34.388 success 220, unsuccessful 131, failed 0 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:34.388 rmmod nvme_tcp 00:41:34.388 rmmod nvme_fabrics 00:41:34.388 rmmod nvme_keyring 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65113 ']' 00:41:34.388 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65113 00:41:34.389 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65113 ']' 00:41:34.389 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65113 00:41:34.389 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:41:34.389 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:34.389 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65113 00:41:34.389 killing process with pid 65113 00:41:34.389 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:34.389 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:34.389 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65113' 00:41:34.389 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65113 00:41:34.389 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65113 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:34.389 05:32:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:34.389 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:41:34.648 00:41:34.648 real 0m24.127s 00:41:34.648 user 0m39.815s 00:41:34.648 sys 0m6.346s 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:34.648 ************************************ 00:41:34.648 END TEST nvmf_zcopy 00:41:34.648 ************************************ 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:41:34.648 ************************************ 00:41:34.648 START TEST nvmf_nmic 00:41:34.648 ************************************ 00:41:34.648 05:32:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:41:34.648 * Looking for test storage... 00:41:34.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:41:34.648 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:34.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.908 --rc genhtml_branch_coverage=1 00:41:34.908 --rc genhtml_function_coverage=1 00:41:34.908 --rc genhtml_legend=1 00:41:34.908 --rc geninfo_all_blocks=1 00:41:34.908 --rc geninfo_unexecuted_blocks=1 00:41:34.908 00:41:34.908 ' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:34.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.908 --rc genhtml_branch_coverage=1 00:41:34.908 --rc genhtml_function_coverage=1 00:41:34.908 --rc genhtml_legend=1 00:41:34.908 --rc geninfo_all_blocks=1 00:41:34.908 --rc geninfo_unexecuted_blocks=1 00:41:34.908 00:41:34.908 ' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:34.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.908 --rc genhtml_branch_coverage=1 00:41:34.908 --rc genhtml_function_coverage=1 00:41:34.908 --rc genhtml_legend=1 00:41:34.908 --rc geninfo_all_blocks=1 00:41:34.908 --rc geninfo_unexecuted_blocks=1 00:41:34.908 00:41:34.908 ' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:34.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.908 --rc genhtml_branch_coverage=1 00:41:34.908 --rc genhtml_function_coverage=1 00:41:34.908 --rc genhtml_legend=1 00:41:34.908 --rc geninfo_all_blocks=1 00:41:34.908 --rc geninfo_unexecuted_blocks=1 00:41:34.908 00:41:34.908 ' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:34.908 05:32:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:34.908 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:34.908 05:32:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:34.908 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:34.909 Cannot 
find device "nvmf_init_br" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:34.909 Cannot find device "nvmf_init_br2" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:34.909 Cannot find device "nvmf_tgt_br" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:34.909 Cannot find device "nvmf_tgt_br2" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:34.909 Cannot find device "nvmf_init_br" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:34.909 Cannot find device "nvmf_init_br2" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:34.909 Cannot find device "nvmf_tgt_br" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:34.909 Cannot find device "nvmf_tgt_br2" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:34.909 Cannot find device "nvmf_br" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:34.909 Cannot find device "nvmf_init_if" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:34.909 Cannot find device "nvmf_init_if2" 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:34.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:34.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:34.909 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:35.169 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:35.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:35.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:41:35.169 00:41:35.169 --- 10.0.0.3 ping statistics --- 00:41:35.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:35.169 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:35.169 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:35.169 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:41:35.169 00:41:35.169 --- 10.0.0.4 ping statistics --- 00:41:35.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:35.169 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:35.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:35.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:41:35.169 00:41:35.169 --- 10.0.0.1 ping statistics --- 00:41:35.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:35.169 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:35.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:35.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:41:35.169 00:41:35.169 --- 10.0.0.2 ping statistics --- 00:41:35.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:35.169 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65637 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65637 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65637 ']' 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:35.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:35.169 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.169 [2024-12-09 05:32:22.120227] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
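At this point nvmfappstart has launched nvmf_tgt inside the target namespace (pid 65637) and waitforlisten blocks until the application is ready to accept JSON-RPC commands on /var/tmp/spdk.sock. A minimal stand-in for that wait loop, assuming only the pid and socket path seen in the log (the real helper in test/common/autotest_common.sh does more bookkeeping):

  pid=65637
  rpc_sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      [ -S "$rpc_sock" ] && break      # RPC socket exists: target is ready for commands
      sleep 0.1
  done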
00:41:35.169 [2024-12-09 05:32:22.120320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:35.428 [2024-12-09 05:32:22.281408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:35.428 [2024-12-09 05:32:22.322770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:35.428 [2024-12-09 05:32:22.323039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:35.428 [2024-12-09 05:32:22.323224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:35.428 [2024-12-09 05:32:22.323378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:35.428 [2024-12-09 05:32:22.323423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:35.428 [2024-12-09 05:32:22.324528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:35.428 [2024-12-09 05:32:22.324612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:35.428 [2024-12-09 05:32:22.324707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:35.428 [2024-12-09 05:32:22.324701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:35.428 [2024-12-09 05:32:22.359156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.688 [2024-12-09 05:32:22.459173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.688 Malloc0 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:35.688 05:32:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.688 [2024-12-09 05:32:22.525891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:35.688 test case1: single bdev can't be used in multiple subsystems 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.688 [2024-12-09 05:32:22.553700] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:41:35.688 [2024-12-09 05:32:22.553746] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:35.688 [2024-12-09 05:32:22.553761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:35.688 request: 00:41:35.688 { 00:41:35.688 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:35.688 "namespace": { 00:41:35.688 "bdev_name": "Malloc0", 00:41:35.688 "no_auto_visible": false, 00:41:35.688 "hide_metadata": false 00:41:35.688 }, 00:41:35.688 "method": "nvmf_subsystem_add_ns", 00:41:35.688 "req_id": 1 00:41:35.688 } 00:41:35.688 Got JSON-RPC error response 00:41:35.688 response: 00:41:35.688 { 00:41:35.688 "code": -32602, 00:41:35.688 "message": "Invalid parameters" 00:41:35.688 } 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:35.688 Adding namespace failed - expected result. 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:35.688 test case2: host connect to nvmf target in multiple paths 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.688 [2024-12-09 05:32:22.569831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.688 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:41:35.947 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:41:35.947 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:35.947 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:41:35.947 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:35.947 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:35.947 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:41:38.503 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:38.503 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:38.503 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:38.503 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:38.503 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:41:38.503 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:41:38.504 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:38.504 [global] 00:41:38.504 thread=1 00:41:38.504 invalidate=1 00:41:38.504 rw=write 00:41:38.504 time_based=1 00:41:38.504 runtime=1 00:41:38.504 ioengine=libaio 00:41:38.504 direct=1 00:41:38.504 bs=4096 00:41:38.504 iodepth=1 00:41:38.504 norandommap=0 00:41:38.504 numjobs=1 00:41:38.504 00:41:38.504 verify_dump=1 00:41:38.504 verify_backlog=512 00:41:38.504 verify_state_save=0 00:41:38.504 do_verify=1 00:41:38.504 verify=crc32c-intel 00:41:38.504 [job0] 00:41:38.504 filename=/dev/nvme0n1 00:41:38.504 Could not set queue depth (nvme0n1) 00:41:38.504 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:38.504 fio-3.35 00:41:38.504 Starting 1 thread 00:41:39.437 00:41:39.437 job0: (groupid=0, jobs=1): err= 0: pid=65721: Mon Dec 9 05:32:26 2024 00:41:39.437 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:41:39.437 slat (nsec): min=12466, max=68462, avg=16934.48, stdev=5550.05 00:41:39.437 clat (usec): min=141, max=516, avg=192.06, stdev=24.38 00:41:39.437 lat (usec): min=156, max=543, avg=209.00, stdev=26.03 00:41:39.437 clat percentiles (usec): 00:41:39.437 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 174], 00:41:39.437 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:41:39.437 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:41:39.437 | 99.00th=[ 277], 99.50th=[ 302], 99.90th=[ 343], 99.95th=[ 469], 00:41:39.437 | 99.99th=[ 519] 00:41:39.437 write: IOPS=2997, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:41:39.437 slat (usec): min=17, max=120, avg=25.89, stdev=10.37 00:41:39.437 clat (usec): min=54, max=255, avg=125.61, stdev=27.80 00:41:39.437 lat (usec): min=105, max=343, avg=151.49, stdev=34.52 00:41:39.437 clat percentiles (usec): 00:41:39.437 | 1.00th=[ 93], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 106], 00:41:39.437 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 117], 60.00th=[ 123], 00:41:39.437 | 70.00th=[ 130], 80.00th=[ 141], 90.00th=[ 165], 95.00th=[ 190], 00:41:39.437 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 239], 99.95th=[ 239], 00:41:39.437 | 99.99th=[ 255] 00:41:39.437 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:41:39.437 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:41:39.437 lat (usec) : 100=4.60%, 250=94.53%, 500=0.85%, 750=0.02% 00:41:39.437 cpu : usr=3.00%, sys=8.70%, ctx=5560, majf=0, minf=5 00:41:39.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:39.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:39.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:39.437 issued rwts: total=2560,3000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:39.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:39.437 00:41:39.437 Run status group 0 (all jobs): 00:41:39.437 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:41:39.437 WRITE: bw=11.7MiB/s (12.3MB/s), 11.7MiB/s-11.7MiB/s (12.3MB/s-12.3MB/s), io=11.7MiB (12.3MB), run=1001-1001msec 00:41:39.437 00:41:39.437 Disk stats (read/write): 00:41:39.437 nvme0n1: ios=2404/2560, merge=0/0, 
ticks=505/360, in_queue=865, util=91.78% 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:39.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:39.437 rmmod nvme_tcp 00:41:39.437 rmmod nvme_fabrics 00:41:39.437 rmmod nvme_keyring 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65637 ']' 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65637 00:41:39.437 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65637 ']' 00:41:39.438 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65637 00:41:39.438 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:41:39.438 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:39.438 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65637 00:41:39.438 killing process with pid 65637 00:41:39.438 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:39.438 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:39.438 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65637' 00:41:39.438 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 65637 00:41:39.438 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65637 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:39.696 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:41:39.954 ************************************ 00:41:39.954 END TEST nvmf_nmic 00:41:39.954 ************************************ 00:41:39.954 00:41:39.954 real 0m5.346s 00:41:39.954 user 0m15.656s 00:41:39.954 sys 0m2.261s 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:41:39.954 ************************************ 00:41:39.954 START TEST nvmf_fio_target 00:41:39.954 ************************************ 00:41:39.954 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:41:39.954 * Looking for test storage... 00:41:40.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:40.213 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:40.213 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:41:40.213 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:40.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:40.213 --rc genhtml_branch_coverage=1 00:41:40.213 --rc genhtml_function_coverage=1 00:41:40.213 --rc genhtml_legend=1 00:41:40.213 --rc geninfo_all_blocks=1 00:41:40.213 --rc geninfo_unexecuted_blocks=1 00:41:40.213 00:41:40.213 ' 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:40.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:40.213 --rc genhtml_branch_coverage=1 00:41:40.213 --rc genhtml_function_coverage=1 00:41:40.213 --rc genhtml_legend=1 00:41:40.213 --rc geninfo_all_blocks=1 00:41:40.213 --rc geninfo_unexecuted_blocks=1 00:41:40.213 00:41:40.213 ' 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:40.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:40.213 --rc genhtml_branch_coverage=1 00:41:40.213 --rc genhtml_function_coverage=1 00:41:40.213 --rc genhtml_legend=1 00:41:40.213 --rc geninfo_all_blocks=1 00:41:40.213 --rc geninfo_unexecuted_blocks=1 00:41:40.213 00:41:40.213 ' 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:40.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:40.213 --rc genhtml_branch_coverage=1 00:41:40.213 --rc genhtml_function_coverage=1 00:41:40.213 --rc genhtml_legend=1 00:41:40.213 --rc geninfo_all_blocks=1 00:41:40.213 --rc geninfo_unexecuted_blocks=1 00:41:40.213 00:41:40.213 ' 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:40.213 
05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:40.213 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:40.214 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:40.214 05:32:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:40.214 Cannot find device "nvmf_init_br" 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:40.214 Cannot find device "nvmf_init_br2" 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:40.214 Cannot find device "nvmf_tgt_br" 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:40.214 Cannot find device "nvmf_tgt_br2" 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:40.214 Cannot find device "nvmf_init_br" 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:40.214 Cannot find device "nvmf_init_br2" 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:40.214 Cannot find device "nvmf_tgt_br" 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:40.214 Cannot find device "nvmf_tgt_br2" 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:40.214 Cannot find device "nvmf_br" 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:41:40.214 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:40.214 Cannot find device "nvmf_init_if" 00:41:40.215 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:41:40.215 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:40.472 Cannot find device "nvmf_init_if2" 00:41:40.472 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:41:40.472 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:40.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:40.472 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:41:40.472 
05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:40.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:40.472 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:41:40.472 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:40.472 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:40.473 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:40.731 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:40.731 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:41:40.731 00:41:40.731 --- 10.0.0.3 ping statistics --- 00:41:40.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.731 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:40.731 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:40.731 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:41:40.731 00:41:40.731 --- 10.0.0.4 ping statistics --- 00:41:40.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.731 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:40.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:40.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:41:40.731 00:41:40.731 --- 10.0.0.1 ping statistics --- 00:41:40.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.731 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:40.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:40.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:41:40.731 00:41:40.731 --- 10.0.0.2 ping statistics --- 00:41:40.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.731 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:40.731 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=65948 00:41:40.732 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:41:40.732 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 65948 00:41:40.732 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 65948 ']' 00:41:40.732 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:40.732 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:40.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:40.732 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:40.732 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:40.732 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:40.732 [2024-12-09 05:32:27.551330] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
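For reference, the nvmf_veth_init sequence traced above reduces to roughly the following commands. This is a consolidated sketch, not the harness script itself; it assumes root and uses the names the harness defaults to (nvmf_br, nvmf_tgt_ns_spdk, nvmf_init_if*, nvmf_tgt_if*, 10.0.0.1-4/24), all of which appear in the trace above.

# Build the veth/namespace topology the NVMe/TCP tests run over.
ip netns add nvmf_tgt_ns_spdk
# veth pairs: initiator ends stay on the host, target ends go into the namespace;
# each end is peered with a port that will be enslaved to the test bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# 10.0.0.1/.2 are the initiator addresses, 10.0.0.3/.4 the target addresses inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring everything up, then bridge the four peer ends together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
# Accept NVMe/TCP (port 4420) on the initiator interfaces and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity-check reachability in both directions, as the log does above.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With this topology in place, the target (nvmf_tgt) is launched under "ip netns exec nvmf_tgt_ns_spdk" and listens on 10.0.0.3:4420, while the host-side initiator connects with "nvme connect", exactly as the subsequent log entries show.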
00:41:40.732 [2024-12-09 05:32:27.551415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:41.001 [2024-12-09 05:32:27.701933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:41.001 [2024-12-09 05:32:27.732539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:41.001 [2024-12-09 05:32:27.733041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:41.001 [2024-12-09 05:32:27.733285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:41.001 [2024-12-09 05:32:27.733545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:41.001 [2024-12-09 05:32:27.733778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:41.001 [2024-12-09 05:32:27.734737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:41.001 [2024-12-09 05:32:27.734905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:41.001 [2024-12-09 05:32:27.735295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:41.001 [2024-12-09 05:32:27.735461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:41.001 [2024-12-09 05:32:27.766355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:41.001 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:41.001 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:41:41.001 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:41.001 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:41.001 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:41.001 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:41.001 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:41.260 [2024-12-09 05:32:28.140212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:41.260 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:41.827 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:41.827 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:41.827 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:41.827 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:42.392 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:42.392 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:42.650 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:42.650 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:42.907 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:43.166 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:43.166 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:43.424 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:43.424 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:43.683 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:41:43.683 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:43.941 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:44.200 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:44.200 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:44.459 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:44.459 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:44.718 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:41:44.976 [2024-12-09 05:32:31.833314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:44.976 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:45.235 05:32:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:45.494 05:32:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:41:45.753 05:32:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:45.753 05:32:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:41:45.753 05:32:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:45.753 05:32:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:41:45.753 05:32:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:41:45.753 05:32:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:41:47.668 05:32:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:47.668 05:32:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:47.668 05:32:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:47.668 05:32:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:41:47.668 05:32:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:47.668 05:32:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:41:47.668 05:32:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:47.668 [global] 00:41:47.668 thread=1 00:41:47.668 invalidate=1 00:41:47.668 rw=write 00:41:47.668 time_based=1 00:41:47.668 runtime=1 00:41:47.668 ioengine=libaio 00:41:47.668 direct=1 00:41:47.668 bs=4096 00:41:47.668 iodepth=1 00:41:47.668 norandommap=0 00:41:47.668 numjobs=1 00:41:47.668 00:41:47.668 verify_dump=1 00:41:47.668 verify_backlog=512 00:41:47.668 verify_state_save=0 00:41:47.668 do_verify=1 00:41:47.668 verify=crc32c-intel 00:41:47.668 [job0] 00:41:47.668 filename=/dev/nvme0n1 00:41:47.668 [job1] 00:41:47.668 filename=/dev/nvme0n2 00:41:47.668 [job2] 00:41:47.668 filename=/dev/nvme0n3 00:41:47.668 [job3] 00:41:47.668 filename=/dev/nvme0n4 00:41:47.668 Could not set queue depth (nvme0n1) 00:41:47.668 Could not set queue depth (nvme0n2) 00:41:47.668 Could not set queue depth (nvme0n3) 00:41:47.668 Could not set queue depth (nvme0n4) 00:41:47.928 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:47.928 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:47.928 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:47.928 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:47.928 fio-3.35 00:41:47.928 Starting 4 threads 00:41:49.303 00:41:49.303 job0: (groupid=0, jobs=1): err= 0: pid=66129: Mon Dec 9 05:32:35 2024 00:41:49.303 read: IOPS=1955, BW=7820KiB/s (8008kB/s)(7828KiB/1001msec) 00:41:49.303 slat (nsec): min=12058, max=38864, avg=14501.68, stdev=3154.39 00:41:49.303 clat (usec): min=139, max=973, avg=279.07, stdev=51.50 00:41:49.303 lat (usec): min=154, max=988, avg=293.57, stdev=52.88 00:41:49.303 clat percentiles (usec): 00:41:49.303 | 1.00th=[ 180], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:41:49.303 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:41:49.303 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 334], 95.00th=[ 355], 00:41:49.303 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[ 545], 99.95th=[ 971], 00:41:49.303 | 99.99th=[ 971] 
00:41:49.303 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:49.303 slat (nsec): min=14338, max=70626, avg=22487.53, stdev=7074.63 00:41:49.303 clat (usec): min=91, max=556, avg=181.97, stdev=34.14 00:41:49.303 lat (usec): min=111, max=574, avg=204.46, stdev=35.63 00:41:49.303 clat percentiles (usec): 00:41:49.303 | 1.00th=[ 99], 5.00th=[ 111], 10.00th=[ 118], 20.00th=[ 174], 00:41:49.303 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:41:49.303 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 221], 00:41:49.303 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 277], 99.95th=[ 351], 00:41:49.303 | 99.99th=[ 553] 00:41:49.303 bw ( KiB/s): min= 8192, max= 8192, per=22.17%, avg=8192.00, stdev= 0.00, samples=1 00:41:49.303 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:49.303 lat (usec) : 100=0.65%, 250=59.20%, 500=39.58%, 750=0.55%, 1000=0.02% 00:41:49.303 cpu : usr=1.30%, sys=6.10%, ctx=4005, majf=0, minf=13 00:41:49.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.303 issued rwts: total=1957,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:49.303 job1: (groupid=0, jobs=1): err= 0: pid=66130: Mon Dec 9 05:32:35 2024 00:41:49.303 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:41:49.303 slat (nsec): min=8571, max=57602, avg=14916.62, stdev=3450.16 00:41:49.303 clat (usec): min=136, max=1782, avg=212.01, stdev=67.48 00:41:49.303 lat (usec): min=150, max=1797, avg=226.93, stdev=66.91 00:41:49.303 clat percentiles (usec): 00:41:49.303 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:41:49.303 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 221], 00:41:49.303 | 70.00th=[ 243], 80.00th=[ 258], 90.00th=[ 322], 95.00th=[ 338], 00:41:49.303 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 506], 99.95th=[ 545], 00:41:49.303 | 99.99th=[ 1778] 00:41:49.303 write: IOPS=2587, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets 00:41:49.303 slat (nsec): min=11010, max=99299, avg=23136.57, stdev=6805.49 00:41:49.303 clat (usec): min=92, max=523, avg=135.15, stdev=25.62 00:41:49.303 lat (usec): min=114, max=542, avg=158.29, stdev=24.38 00:41:49.303 clat percentiles (usec): 00:41:49.303 | 1.00th=[ 102], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 120], 00:41:49.303 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:41:49.303 | 70.00th=[ 137], 80.00th=[ 145], 90.00th=[ 169], 95.00th=[ 190], 00:41:49.303 | 99.00th=[ 215], 99.50th=[ 229], 99.90th=[ 326], 99.95th=[ 351], 00:41:49.303 | 99.99th=[ 523] 00:41:49.303 bw ( KiB/s): min=12288, max=12288, per=33.26%, avg=12288.00, stdev= 0.00, samples=1 00:41:49.303 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:41:49.303 lat (usec) : 100=0.33%, 250=87.51%, 500=12.08%, 750=0.06% 00:41:49.303 lat (msec) : 2=0.02% 00:41:49.303 cpu : usr=2.00%, sys=8.00%, ctx=5151, majf=0, minf=13 00:41:49.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.303 issued rwts: total=2560,2590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.303 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:41:49.303 job2: (groupid=0, jobs=1): err= 0: pid=66131: Mon Dec 9 05:32:35 2024 00:41:49.303 read: IOPS=2411, BW=9646KiB/s (9878kB/s)(9656KiB/1001msec) 00:41:49.303 slat (nsec): min=8366, max=38965, avg=13697.99, stdev=3037.01 00:41:49.303 clat (usec): min=151, max=714, avg=209.51, stdev=48.23 00:41:49.303 lat (usec): min=165, max=727, avg=223.21, stdev=47.89 00:41:49.303 clat percentiles (usec): 00:41:49.303 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:41:49.303 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 200], 00:41:49.303 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 314], 00:41:49.303 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 363], 99.95th=[ 465], 00:41:49.303 | 99.99th=[ 717] 00:41:49.303 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:41:49.303 slat (usec): min=13, max=152, avg=20.23, stdev= 4.92 00:41:49.303 clat (usec): min=107, max=5992, avg=156.87, stdev=220.37 00:41:49.303 lat (usec): min=125, max=6010, avg=177.09, stdev=220.70 00:41:49.303 clat percentiles (usec): 00:41:49.303 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 125], 20.00th=[ 130], 00:41:49.303 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 147], 00:41:49.303 | 70.00th=[ 151], 80.00th=[ 159], 90.00th=[ 178], 95.00th=[ 188], 00:41:49.304 | 99.00th=[ 225], 99.50th=[ 258], 99.90th=[ 4817], 99.95th=[ 4817], 00:41:49.304 | 99.99th=[ 5997] 00:41:49.304 bw ( KiB/s): min=12288, max=12288, per=33.26%, avg=12288.00, stdev= 0.00, samples=1 00:41:49.304 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:41:49.304 lat (usec) : 250=89.69%, 500=10.15%, 750=0.02% 00:41:49.304 lat (msec) : 2=0.02%, 4=0.04%, 10=0.08% 00:41:49.304 cpu : usr=1.20%, sys=7.50%, ctx=4976, majf=0, minf=3 00:41:49.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.304 issued rwts: total=2414,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:49.304 job3: (groupid=0, jobs=1): err= 0: pid=66132: Mon Dec 9 05:32:35 2024 00:41:49.304 read: IOPS=1867, BW=7469KiB/s (7648kB/s)(7476KiB/1001msec) 00:41:49.304 slat (nsec): min=12789, max=81866, avg=15092.77, stdev=3324.87 00:41:49.304 clat (usec): min=190, max=2409, avg=275.42, stdev=61.86 00:41:49.304 lat (usec): min=204, max=2439, avg=290.51, stdev=62.61 00:41:49.304 clat percentiles (usec): 00:41:49.304 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 251], 00:41:49.304 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:41:49.304 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 343], 00:41:49.304 | 99.00th=[ 408], 99.50th=[ 478], 99.90th=[ 791], 99.95th=[ 2409], 00:41:49.304 | 99.99th=[ 2409] 00:41:49.304 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:49.304 slat (usec): min=14, max=153, avg=22.25, stdev= 5.38 00:41:49.304 clat (usec): min=109, max=1520, avg=197.60, stdev=54.16 00:41:49.304 lat (usec): min=134, max=1539, avg=219.85, stdev=56.20 00:41:49.304 clat percentiles (usec): 00:41:49.304 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 139], 20.00th=[ 178], 00:41:49.304 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:41:49.304 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 233], 95.00th=[ 310], 00:41:49.304 | 
99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 400], 99.95th=[ 465], 00:41:49.304 | 99.99th=[ 1516] 00:41:49.304 bw ( KiB/s): min= 8192, max= 8192, per=22.17%, avg=8192.00, stdev= 0.00, samples=1 00:41:49.304 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:49.304 lat (usec) : 250=56.55%, 500=43.30%, 750=0.08%, 1000=0.03% 00:41:49.304 lat (msec) : 2=0.03%, 4=0.03% 00:41:49.304 cpu : usr=2.10%, sys=5.20%, ctx=3918, majf=0, minf=7 00:41:49.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.304 issued rwts: total=1869,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:49.304 00:41:49.304 Run status group 0 (all jobs): 00:41:49.304 READ: bw=34.3MiB/s (36.0MB/s), 7469KiB/s-9.99MiB/s (7648kB/s-10.5MB/s), io=34.4MiB (36.0MB), run=1001-1001msec 00:41:49.304 WRITE: bw=36.1MiB/s (37.8MB/s), 8184KiB/s-10.1MiB/s (8380kB/s-10.6MB/s), io=36.1MiB (37.9MB), run=1001-1001msec 00:41:49.304 00:41:49.304 Disk stats (read/write): 00:41:49.304 nvme0n1: ios=1585/1950, merge=0/0, ticks=464/372, in_queue=836, util=87.26% 00:41:49.304 nvme0n2: ios=2075/2560, merge=0/0, ticks=415/353, in_queue=768, util=87.36% 00:41:49.304 nvme0n3: ios=2048/2284, merge=0/0, ticks=419/359, in_queue=778, util=88.65% 00:41:49.304 nvme0n4: ios=1536/1754, merge=0/0, ticks=426/365, in_queue=791, util=89.53% 00:41:49.304 05:32:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:49.304 [global] 00:41:49.304 thread=1 00:41:49.304 invalidate=1 00:41:49.304 rw=randwrite 00:41:49.304 time_based=1 00:41:49.304 runtime=1 00:41:49.304 ioengine=libaio 00:41:49.304 direct=1 00:41:49.304 bs=4096 00:41:49.304 iodepth=1 00:41:49.304 norandommap=0 00:41:49.304 numjobs=1 00:41:49.304 00:41:49.304 verify_dump=1 00:41:49.304 verify_backlog=512 00:41:49.304 verify_state_save=0 00:41:49.304 do_verify=1 00:41:49.304 verify=crc32c-intel 00:41:49.304 [job0] 00:41:49.304 filename=/dev/nvme0n1 00:41:49.304 [job1] 00:41:49.304 filename=/dev/nvme0n2 00:41:49.304 [job2] 00:41:49.304 filename=/dev/nvme0n3 00:41:49.304 [job3] 00:41:49.304 filename=/dev/nvme0n4 00:41:49.304 Could not set queue depth (nvme0n1) 00:41:49.304 Could not set queue depth (nvme0n2) 00:41:49.304 Could not set queue depth (nvme0n3) 00:41:49.304 Could not set queue depth (nvme0n4) 00:41:49.304 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:49.304 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:49.304 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:49.304 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:49.304 fio-3.35 00:41:49.304 Starting 4 threads 00:41:50.719 00:41:50.719 job0: (groupid=0, jobs=1): err= 0: pid=66191: Mon Dec 9 05:32:37 2024 00:41:50.719 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:41:50.719 slat (nsec): min=11779, max=36281, avg=14277.60, stdev=2339.32 00:41:50.719 clat (usec): min=131, max=271, avg=163.95, stdev=12.87 00:41:50.719 lat (usec): min=144, max=285, avg=178.23, stdev=13.15 00:41:50.719 clat 
percentiles (usec): 00:41:50.719 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:41:50.719 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:41:50.719 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:41:50.719 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 223], 99.95th=[ 258], 00:41:50.719 | 99.99th=[ 273] 00:41:50.719 write: IOPS=3151, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:41:50.719 slat (usec): min=14, max=114, avg=20.50, stdev= 4.44 00:41:50.719 clat (usec): min=91, max=489, avg=119.58, stdev=13.02 00:41:50.719 lat (usec): min=110, max=518, avg=140.08, stdev=13.92 00:41:50.719 clat percentiles (usec): 00:41:50.719 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 112], 00:41:50.719 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 122], 00:41:50.719 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 139], 00:41:50.719 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 227], 99.95th=[ 285], 00:41:50.719 | 99.99th=[ 490] 00:41:50.719 bw ( KiB/s): min=12288, max=12288, per=39.61%, avg=12288.00, stdev= 0.00, samples=1 00:41:50.719 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:41:50.719 lat (usec) : 100=0.88%, 250=99.04%, 500=0.08% 00:41:50.719 cpu : usr=2.20%, sys=8.60%, ctx=6228, majf=0, minf=11 00:41:50.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:50.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.719 issued rwts: total=3072,3155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:50.719 job1: (groupid=0, jobs=1): err= 0: pid=66192: Mon Dec 9 05:32:37 2024 00:41:50.719 read: IOPS=1391, BW=5566KiB/s (5700kB/s)(5572KiB/1001msec) 00:41:50.719 slat (nsec): min=9545, max=46897, avg=15384.19, stdev=4499.32 00:41:50.719 clat (usec): min=204, max=650, avg=345.38, stdev=34.49 00:41:50.719 lat (usec): min=219, max=669, avg=360.76, stdev=34.76 00:41:50.719 clat percentiles (usec): 00:41:50.719 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 314], 20.00th=[ 330], 00:41:50.719 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 351], 00:41:50.719 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 396], 00:41:50.719 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[ 611], 99.95th=[ 652], 00:41:50.719 | 99.99th=[ 652] 00:41:50.719 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:41:50.719 slat (nsec): min=12401, max=91715, avg=25036.08, stdev=9652.61 00:41:50.719 clat (usec): min=160, max=1728, avg=295.01, stdev=69.97 00:41:50.719 lat (usec): min=176, max=1745, avg=320.04, stdev=74.23 00:41:50.719 clat percentiles (usec): 00:41:50.719 | 1.00th=[ 180], 5.00th=[ 194], 10.00th=[ 229], 20.00th=[ 265], 00:41:50.719 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:41:50.719 | 70.00th=[ 302], 80.00th=[ 322], 90.00th=[ 388], 95.00th=[ 412], 00:41:50.719 | 99.00th=[ 457], 99.50th=[ 494], 99.90th=[ 660], 99.95th=[ 1729], 00:41:50.719 | 99.99th=[ 1729] 00:41:50.719 bw ( KiB/s): min= 7408, max= 7408, per=23.88%, avg=7408.00, stdev= 0.00, samples=1 00:41:50.719 iops : min= 1852, max= 1852, avg=1852.00, stdev= 0.00, samples=1 00:41:50.719 lat (usec) : 250=7.44%, 500=92.15%, 750=0.38% 00:41:50.719 lat (msec) : 2=0.03% 00:41:50.719 cpu : usr=1.90%, sys=4.60%, ctx=2930, majf=0, minf=9 00:41:50.719 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:50.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.719 issued rwts: total=1393,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:50.719 job2: (groupid=0, jobs=1): err= 0: pid=66193: Mon Dec 9 05:32:37 2024 00:41:50.719 read: IOPS=1424, BW=5698KiB/s (5835kB/s)(5704KiB/1001msec) 00:41:50.719 slat (usec): min=13, max=274, avg=24.31, stdev= 8.86 00:41:50.719 clat (usec): min=159, max=2699, avg=376.33, stdev=107.48 00:41:50.719 lat (usec): min=180, max=2731, avg=400.64, stdev=109.91 00:41:50.719 clat percentiles (usec): 00:41:50.719 | 1.00th=[ 297], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:41:50.719 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 351], 00:41:50.719 | 70.00th=[ 359], 80.00th=[ 420], 90.00th=[ 486], 95.00th=[ 562], 00:41:50.719 | 99.00th=[ 685], 99.50th=[ 701], 99.90th=[ 1074], 99.95th=[ 2704], 00:41:50.719 | 99.99th=[ 2704] 00:41:50.719 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:41:50.719 slat (nsec): min=20329, max=79822, avg=32322.20, stdev=5931.33 00:41:50.719 clat (usec): min=106, max=410, avg=240.93, stdev=52.21 00:41:50.719 lat (usec): min=133, max=440, avg=273.25, stdev=54.11 00:41:50.719 clat percentiles (usec): 00:41:50.719 | 1.00th=[ 122], 5.00th=[ 131], 10.00th=[ 139], 20.00th=[ 229], 00:41:50.719 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:41:50.719 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:41:50.719 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 412], 00:41:50.719 | 99.99th=[ 412] 00:41:50.719 bw ( KiB/s): min= 8192, max= 8192, per=26.41%, avg=8192.00, stdev= 0.00, samples=1 00:41:50.719 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:50.719 lat (usec) : 250=21.10%, 500=75.25%, 750=3.51%, 1000=0.07% 00:41:50.719 lat (msec) : 2=0.03%, 4=0.03% 00:41:50.719 cpu : usr=2.40%, sys=6.30%, ctx=2963, majf=0, minf=11 00:41:50.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:50.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.720 issued rwts: total=1426,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:50.720 job3: (groupid=0, jobs=1): err= 0: pid=66194: Mon Dec 9 05:32:37 2024 00:41:50.720 read: IOPS=1392, BW=5570KiB/s (5704kB/s)(5576KiB/1001msec) 00:41:50.720 slat (nsec): min=10053, max=61809, avg=17448.38, stdev=6545.80 00:41:50.720 clat (usec): min=183, max=600, avg=343.01, stdev=31.99 00:41:50.720 lat (usec): min=218, max=615, avg=360.46, stdev=33.58 00:41:50.720 clat percentiles (usec): 00:41:50.720 | 1.00th=[ 269], 5.00th=[ 289], 10.00th=[ 318], 20.00th=[ 330], 00:41:50.720 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:41:50.720 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 388], 00:41:50.720 | 99.00th=[ 465], 99.50th=[ 474], 99.90th=[ 594], 99.95th=[ 603], 00:41:50.720 | 99.99th=[ 603] 00:41:50.720 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:41:50.720 slat (usec): min=12, max=109, avg=30.16, stdev=10.13 00:41:50.720 clat (usec): min=154, max=1819, avg=289.50, stdev=70.60 00:41:50.720 lat (usec): 
min=178, max=1841, avg=319.66, stdev=75.54 00:41:50.720 clat percentiles (usec): 00:41:50.720 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 233], 20.00th=[ 258], 00:41:50.720 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:41:50.720 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 392], 95.00th=[ 408], 00:41:50.720 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 758], 99.95th=[ 1827], 00:41:50.720 | 99.99th=[ 1827] 00:41:50.720 bw ( KiB/s): min= 7416, max= 7416, per=23.91%, avg=7416.00, stdev= 0.00, samples=1 00:41:50.720 iops : min= 1854, max= 1854, avg=1854.00, stdev= 0.00, samples=1 00:41:50.720 lat (usec) : 250=7.68%, 500=92.05%, 750=0.20%, 1000=0.03% 00:41:50.720 lat (msec) : 2=0.03% 00:41:50.720 cpu : usr=1.40%, sys=6.30%, ctx=2930, majf=0, minf=15 00:41:50.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:50.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.720 issued rwts: total=1394,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:50.720 00:41:50.720 Run status group 0 (all jobs): 00:41:50.720 READ: bw=28.4MiB/s (29.8MB/s), 5566KiB/s-12.0MiB/s (5700kB/s-12.6MB/s), io=28.5MiB (29.8MB), run=1001-1001msec 00:41:50.720 WRITE: bw=30.3MiB/s (31.8MB/s), 6138KiB/s-12.3MiB/s (6285kB/s-12.9MB/s), io=30.3MiB (31.8MB), run=1001-1001msec 00:41:50.720 00:41:50.720 Disk stats (read/write): 00:41:50.720 nvme0n1: ios=2610/2898, merge=0/0, ticks=459/362, in_queue=821, util=89.48% 00:41:50.720 nvme0n2: ios=1089/1536, merge=0/0, ticks=348/414, in_queue=762, util=88.90% 00:41:50.720 nvme0n3: ios=1103/1536, merge=0/0, ticks=497/381, in_queue=878, util=90.07% 00:41:50.720 nvme0n4: ios=1043/1536, merge=0/0, ticks=359/437, in_queue=796, util=89.72% 00:41:50.720 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:50.720 [global] 00:41:50.720 thread=1 00:41:50.720 invalidate=1 00:41:50.720 rw=write 00:41:50.720 time_based=1 00:41:50.720 runtime=1 00:41:50.720 ioengine=libaio 00:41:50.720 direct=1 00:41:50.720 bs=4096 00:41:50.720 iodepth=128 00:41:50.720 norandommap=0 00:41:50.720 numjobs=1 00:41:50.720 00:41:50.720 verify_dump=1 00:41:50.720 verify_backlog=512 00:41:50.720 verify_state_save=0 00:41:50.720 do_verify=1 00:41:50.720 verify=crc32c-intel 00:41:50.720 [job0] 00:41:50.720 filename=/dev/nvme0n1 00:41:50.720 [job1] 00:41:50.720 filename=/dev/nvme0n2 00:41:50.720 [job2] 00:41:50.720 filename=/dev/nvme0n3 00:41:50.720 [job3] 00:41:50.720 filename=/dev/nvme0n4 00:41:50.720 Could not set queue depth (nvme0n1) 00:41:50.720 Could not set queue depth (nvme0n2) 00:41:50.720 Could not set queue depth (nvme0n3) 00:41:50.720 Could not set queue depth (nvme0n4) 00:41:50.720 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:50.720 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:50.720 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:50.720 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:50.720 fio-3.35 00:41:50.720 Starting 4 threads 00:41:52.093 00:41:52.093 job0: (groupid=0, jobs=1): err= 0: pid=66248: Mon Dec 9 
05:32:38 2024 00:41:52.093 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:41:52.093 slat (usec): min=6, max=6709, avg=186.68, stdev=770.66 00:41:52.093 clat (usec): min=11200, max=48436, avg=22977.62, stdev=6486.45 00:41:52.093 lat (usec): min=12068, max=48446, avg=23164.30, stdev=6556.18 00:41:52.093 clat percentiles (usec): 00:41:52.093 | 1.00th=[13042], 5.00th=[13698], 10.00th=[14091], 20.00th=[15270], 00:41:52.093 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21365], 60.00th=[24249], 00:41:52.093 | 70.00th=[26870], 80.00th=[30540], 90.00th=[31327], 95.00th=[31589], 00:41:52.093 | 99.00th=[39060], 99.50th=[43254], 99.90th=[48497], 99.95th=[48497], 00:41:52.093 | 99.99th=[48497] 00:41:52.093 write: IOPS=2242, BW=8969KiB/s (9184kB/s)(9032KiB/1007msec); 0 zone resets 00:41:52.093 slat (usec): min=11, max=12021, avg=266.31, stdev=882.64 00:41:52.093 clat (usec): min=3675, max=62242, avg=35429.87, stdev=12670.62 00:41:52.093 lat (usec): min=7452, max=62266, avg=35696.17, stdev=12739.12 00:41:52.093 clat percentiles (usec): 00:41:52.093 | 1.00th=[15664], 5.00th=[20317], 10.00th=[20841], 20.00th=[22152], 00:41:52.093 | 30.00th=[25560], 40.00th=[28967], 50.00th=[34866], 60.00th=[39060], 00:41:52.093 | 70.00th=[41157], 80.00th=[47449], 90.00th=[55313], 95.00th=[58983], 00:41:52.093 | 99.00th=[61080], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:41:52.093 | 99.99th=[62129] 00:41:52.093 bw ( KiB/s): min= 8192, max= 8848, per=17.52%, avg=8520.00, stdev=463.86, samples=2 00:41:52.093 iops : min= 2048, max= 2212, avg=2130.00, stdev=115.97, samples=2 00:41:52.093 lat (msec) : 4=0.02%, 10=0.19%, 20=13.82%, 50=77.59%, 100=8.38% 00:41:52.093 cpu : usr=2.68%, sys=6.36%, ctx=321, majf=0, minf=13 00:41:52.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:41:52.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:52.094 issued rwts: total=2048,2258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:52.094 job1: (groupid=0, jobs=1): err= 0: pid=66250: Mon Dec 9 05:32:38 2024 00:41:52.094 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1002msec) 00:41:52.094 slat (usec): min=7, max=5607, avg=174.81, stdev=623.36 00:41:52.094 clat (usec): min=791, max=31060, avg=21462.70, stdev=3471.94 00:41:52.094 lat (usec): min=2694, max=31074, avg=21637.51, stdev=3478.78 00:41:52.094 clat percentiles (usec): 00:41:52.094 | 1.00th=[ 7570], 5.00th=[16057], 10.00th=[17957], 20.00th=[19530], 00:41:52.094 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21627], 60.00th=[21890], 00:41:52.094 | 70.00th=[22152], 80.00th=[23725], 90.00th=[25560], 95.00th=[26608], 00:41:52.094 | 99.00th=[29754], 99.50th=[30016], 99.90th=[31065], 99.95th=[31065], 00:41:52.094 | 99.99th=[31065] 00:41:52.094 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:41:52.094 slat (usec): min=4, max=10436, avg=139.26, stdev=631.20 00:41:52.094 clat (usec): min=3364, max=39038, avg=19462.39, stdev=4169.04 00:41:52.094 lat (usec): min=3385, max=40097, avg=19601.65, stdev=4187.48 00:41:52.094 clat percentiles (usec): 00:41:52.094 | 1.00th=[ 6849], 5.00th=[13698], 10.00th=[14877], 20.00th=[15401], 00:41:52.094 | 30.00th=[17171], 40.00th=[19268], 50.00th=[20055], 60.00th=[20841], 00:41:52.094 | 70.00th=[21890], 80.00th=[22414], 90.00th=[23987], 95.00th=[24773], 00:41:52.094 | 99.00th=[28181], 99.50th=[35390], 
99.90th=[35390], 99.95th=[35390], 00:41:52.094 | 99.99th=[39060] 00:41:52.094 bw ( KiB/s): min=12288, max=12312, per=25.29%, avg=12300.00, stdev=16.97, samples=2 00:41:52.094 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:41:52.094 lat (usec) : 1000=0.02% 00:41:52.094 lat (msec) : 4=0.24%, 10=1.45%, 20=32.78%, 50=65.51% 00:41:52.094 cpu : usr=2.50%, sys=8.59%, ctx=794, majf=0, minf=15 00:41:52.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:41:52.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:52.094 issued rwts: total=3063,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:52.094 job2: (groupid=0, jobs=1): err= 0: pid=66251: Mon Dec 9 05:32:38 2024 00:41:52.094 read: IOPS=2955, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1002msec) 00:41:52.094 slat (usec): min=4, max=6799, avg=167.06, stdev=636.55 00:41:52.094 clat (usec): min=1080, max=30840, avg=21185.07, stdev=2944.12 00:41:52.094 lat (usec): min=3195, max=30863, avg=21352.13, stdev=2933.83 00:41:52.094 clat percentiles (usec): 00:41:52.094 | 1.00th=[ 7504], 5.00th=[17695], 10.00th=[18744], 20.00th=[19792], 00:41:52.094 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:41:52.094 | 70.00th=[21890], 80.00th=[22414], 90.00th=[24249], 95.00th=[25822], 00:41:52.094 | 99.00th=[27132], 99.50th=[27132], 99.90th=[29230], 99.95th=[30802], 00:41:52.094 | 99.99th=[30802] 00:41:52.094 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:41:52.094 slat (usec): min=5, max=6574, avg=157.44, stdev=637.21 00:41:52.094 clat (usec): min=10432, max=30487, avg=20819.97, stdev=4128.27 00:41:52.094 lat (usec): min=10451, max=30507, avg=20977.41, stdev=4142.02 00:41:52.094 clat percentiles (usec): 00:41:52.094 | 1.00th=[10421], 5.00th=[10945], 10.00th=[15533], 20.00th=[18744], 00:41:52.094 | 30.00th=[19792], 40.00th=[20317], 50.00th=[20841], 60.00th=[21890], 00:41:52.094 | 70.00th=[22414], 80.00th=[23200], 90.00th=[25297], 95.00th=[27657], 00:41:52.094 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:41:52.094 | 99.99th=[30540] 00:41:52.094 bw ( KiB/s): min=12288, max=12312, per=25.29%, avg=12300.00, stdev=16.97, samples=2 00:41:52.094 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:41:52.094 lat (msec) : 2=0.02%, 4=0.27%, 10=0.43%, 20=26.42%, 50=72.87% 00:41:52.094 cpu : usr=2.70%, sys=7.99%, ctx=867, majf=0, minf=8 00:41:52.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:41:52.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:52.094 issued rwts: total=2961,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:52.094 job3: (groupid=0, jobs=1): err= 0: pid=66252: Mon Dec 9 05:32:38 2024 00:41:52.094 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:41:52.094 slat (usec): min=7, max=5344, avg=131.25, stdev=542.40 00:41:52.094 clat (usec): min=11974, max=28752, avg=17141.87, stdev=2684.61 00:41:52.094 lat (usec): min=11995, max=30343, avg=17273.13, stdev=2732.96 00:41:52.094 clat percentiles (usec): 00:41:52.094 | 1.00th=[12649], 5.00th=[14091], 10.00th=[14746], 20.00th=[15008], 00:41:52.094 | 30.00th=[15270], 
40.00th=[15401], 50.00th=[15795], 60.00th=[17171], 00:41:52.094 | 70.00th=[18482], 80.00th=[20317], 90.00th=[21103], 95.00th=[21627], 00:41:52.094 | 99.00th=[24249], 99.50th=[25822], 99.90th=[28705], 99.95th=[28705], 00:41:52.094 | 99.99th=[28705] 00:41:52.094 write: IOPS=3835, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1002msec); 0 zone resets 00:41:52.094 slat (usec): min=9, max=11956, avg=129.60, stdev=623.48 00:41:52.094 clat (usec): min=686, max=38883, avg=16700.29, stdev=5847.33 00:41:52.094 lat (usec): min=3637, max=38908, avg=16829.89, stdev=5907.83 00:41:52.094 clat percentiles (usec): 00:41:52.094 | 1.00th=[ 4555], 5.00th=[11994], 10.00th=[12256], 20.00th=[12649], 00:41:52.094 | 30.00th=[13042], 40.00th=[13435], 50.00th=[15270], 60.00th=[15664], 00:41:52.094 | 70.00th=[17171], 80.00th=[20841], 90.00th=[23462], 95.00th=[28967], 00:41:52.094 | 99.00th=[37487], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:41:52.094 | 99.99th=[39060] 00:41:52.094 bw ( KiB/s): min=16376, max=16376, per=33.67%, avg=16376.00, stdev= 0.00, samples=1 00:41:52.094 iops : min= 4094, max= 4094, avg=4094.00, stdev= 0.00, samples=1 00:41:52.094 lat (usec) : 750=0.01% 00:41:52.094 lat (msec) : 4=0.20%, 10=0.93%, 20=74.61%, 50=24.25% 00:41:52.094 cpu : usr=4.90%, sys=10.09%, ctx=315, majf=0, minf=17 00:41:52.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:52.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:52.094 issued rwts: total=3584,3843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:52.094 00:41:52.094 Run status group 0 (all jobs): 00:41:52.094 READ: bw=45.2MiB/s (47.4MB/s), 8135KiB/s-14.0MiB/s (8330kB/s-14.7MB/s), io=45.5MiB (47.7MB), run=1002-1007msec 00:41:52.094 WRITE: bw=47.5MiB/s (49.8MB/s), 8969KiB/s-15.0MiB/s (9184kB/s-15.7MB/s), io=47.8MiB (50.2MB), run=1002-1007msec 00:41:52.094 00:41:52.094 Disk stats (read/write): 00:41:52.094 nvme0n1: ios=1694/2048, merge=0/0, ticks=12071/23046, in_queue=35117, util=88.48% 00:41:52.094 nvme0n2: ios=2609/2764, merge=0/0, ticks=18151/16937, in_queue=35088, util=88.17% 00:41:52.094 nvme0n3: ios=2560/2651, merge=0/0, ticks=17484/15727, in_queue=33211, util=88.54% 00:41:52.094 nvme0n4: ios=3072/3202, merge=0/0, ticks=16973/16425, in_queue=33398, util=89.60% 00:41:52.094 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:52.094 [global] 00:41:52.094 thread=1 00:41:52.094 invalidate=1 00:41:52.094 rw=randwrite 00:41:52.094 time_based=1 00:41:52.094 runtime=1 00:41:52.094 ioengine=libaio 00:41:52.094 direct=1 00:41:52.094 bs=4096 00:41:52.094 iodepth=128 00:41:52.094 norandommap=0 00:41:52.094 numjobs=1 00:41:52.094 00:41:52.094 verify_dump=1 00:41:52.094 verify_backlog=512 00:41:52.094 verify_state_save=0 00:41:52.094 do_verify=1 00:41:52.094 verify=crc32c-intel 00:41:52.094 [job0] 00:41:52.094 filename=/dev/nvme0n1 00:41:52.094 [job1] 00:41:52.094 filename=/dev/nvme0n2 00:41:52.094 [job2] 00:41:52.094 filename=/dev/nvme0n3 00:41:52.094 [job3] 00:41:52.094 filename=/dev/nvme0n4 00:41:52.094 Could not set queue depth (nvme0n1) 00:41:52.094 Could not set queue depth (nvme0n2) 00:41:52.094 Could not set queue depth (nvme0n3) 00:41:52.094 Could not set queue depth (nvme0n4) 00:41:52.094 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:52.094 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:52.094 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:52.094 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:52.094 fio-3.35 00:41:52.094 Starting 4 threads 00:41:53.477 00:41:53.477 job0: (groupid=0, jobs=1): err= 0: pid=66311: Mon Dec 9 05:32:40 2024 00:41:53.477 read: IOPS=2660, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1010msec) 00:41:53.477 slat (usec): min=10, max=17360, avg=181.06, stdev=1379.85 00:41:53.477 clat (usec): min=620, max=39571, avg=23786.10, stdev=3058.06 00:41:53.477 lat (usec): min=11173, max=45157, avg=23967.17, stdev=3264.57 00:41:53.477 clat percentiles (usec): 00:41:53.477 | 1.00th=[11600], 5.00th=[19268], 10.00th=[21890], 20.00th=[22938], 00:41:53.477 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:41:53.477 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27657], 95.00th=[28443], 00:41:53.477 | 99.00th=[31589], 99.50th=[35914], 99.90th=[38536], 99.95th=[38536], 00:41:53.477 | 99.99th=[39584] 00:41:53.477 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:41:53.477 slat (usec): min=15, max=12952, avg=161.81, stdev=1084.23 00:41:53.477 clat (usec): min=9804, max=29460, avg=20845.05, stdev=3056.93 00:41:53.477 lat (usec): min=9853, max=29506, avg=21006.86, stdev=2900.34 00:41:53.477 clat percentiles (usec): 00:41:53.477 | 1.00th=[10028], 5.00th=[11338], 10.00th=[17171], 20.00th=[20317], 00:41:53.477 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21627], 60.00th=[21890], 00:41:53.477 | 70.00th=[22152], 80.00th=[22414], 90.00th=[22938], 95.00th=[24773], 00:41:53.477 | 99.00th=[25560], 99.50th=[25560], 99.90th=[27657], 99.95th=[28443], 00:41:53.477 | 99.99th=[29492] 00:41:53.477 bw ( KiB/s): min=12280, max=12288, per=19.35%, avg=12284.00, stdev= 5.66, samples=2 00:41:53.477 iops : min= 3070, max= 3072, avg=3071.00, stdev= 1.41, samples=2 00:41:53.477 lat (usec) : 750=0.02% 00:41:53.477 lat (msec) : 10=0.42%, 20=11.77%, 50=87.79% 00:41:53.477 cpu : usr=2.87%, sys=8.42%, ctx=119, majf=0, minf=5 00:41:53.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:41:53.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:53.477 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:53.477 job1: (groupid=0, jobs=1): err= 0: pid=66312: Mon Dec 9 05:32:40 2024 00:41:53.477 read: IOPS=2604, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1007msec) 00:41:53.477 slat (usec): min=12, max=12236, avg=162.90, stdev=1045.03 00:41:53.477 clat (usec): min=1457, max=41927, avg=23103.71, stdev=3639.18 00:41:53.477 lat (usec): min=9751, max=47208, avg=23266.61, stdev=3607.10 00:41:53.477 clat percentiles (usec): 00:41:53.477 | 1.00th=[10159], 5.00th=[15795], 10.00th=[19268], 20.00th=[22414], 00:41:53.477 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:41:53.477 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25560], 95.00th=[26084], 00:41:53.477 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:53.477 | 99.99th=[41681] 00:41:53.477 write: IOPS=3050, BW=11.9MiB/s 
(12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:41:53.477 slat (usec): min=5, max=21256, avg=179.89, stdev=1190.17 00:41:53.477 clat (usec): min=10913, max=34073, avg=21817.91, stdev=2564.09 00:41:53.477 lat (usec): min=10986, max=34099, avg=21997.81, stdev=2349.60 00:41:53.477 clat percentiles (usec): 00:41:53.477 | 1.00th=[12780], 5.00th=[19530], 10.00th=[20055], 20.00th=[20579], 00:41:53.477 | 30.00th=[21365], 40.00th=[21627], 50.00th=[21890], 60.00th=[21890], 00:41:53.477 | 70.00th=[22152], 80.00th=[22414], 90.00th=[22938], 95.00th=[25822], 00:41:53.477 | 99.00th=[33424], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:41:53.477 | 99.99th=[33817] 00:41:53.477 bw ( KiB/s): min=11768, max=12312, per=18.96%, avg=12040.00, stdev=384.67, samples=2 00:41:53.477 iops : min= 2942, max= 3078, avg=3010.00, stdev=96.17, samples=2 00:41:53.477 lat (msec) : 2=0.02%, 10=0.21%, 20=10.24%, 50=89.53% 00:41:53.477 cpu : usr=4.08%, sys=7.46%, ctx=120, majf=0, minf=15 00:41:53.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:41:53.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:53.477 issued rwts: total=2623,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:53.477 job2: (groupid=0, jobs=1): err= 0: pid=66314: Mon Dec 9 05:32:40 2024 00:41:53.477 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:41:53.477 slat (usec): min=8, max=7410, avg=100.62, stdev=644.28 00:41:53.477 clat (usec): min=8013, max=21825, avg=14011.60, stdev=1491.88 00:41:53.477 lat (usec): min=8028, max=26498, avg=14112.22, stdev=1523.63 00:41:53.477 clat percentiles (usec): 00:41:53.477 | 1.00th=[ 8848], 5.00th=[12518], 10.00th=[13173], 20.00th=[13566], 00:41:53.477 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:41:53.477 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14877], 95.00th=[15270], 00:41:53.477 | 99.00th=[21103], 99.50th=[21627], 99.90th=[21890], 99.95th=[21890], 00:41:53.477 | 99.99th=[21890] 00:41:53.477 write: IOPS=4923, BW=19.2MiB/s (20.2MB/s)(19.3MiB/1002msec); 0 zone resets 00:41:53.477 slat (usec): min=9, max=9831, avg=101.14, stdev=609.78 00:41:53.477 clat (usec): min=1316, max=18261, avg=12649.79, stdev=1456.75 00:41:53.477 lat (usec): min=1335, max=18284, avg=12750.93, stdev=1353.14 00:41:53.477 clat percentiles (usec): 00:41:53.477 | 1.00th=[ 7177], 5.00th=[10683], 10.00th=[11600], 20.00th=[11863], 00:41:53.477 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13042], 00:41:53.477 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[14091], 00:41:53.477 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:41:53.477 | 99.99th=[18220] 00:41:53.477 bw ( KiB/s): min=17968, max=20521, per=30.31%, avg=19244.50, stdev=1805.24, samples=2 00:41:53.477 iops : min= 4492, max= 5130, avg=4811.00, stdev=451.13, samples=2 00:41:53.477 lat (msec) : 2=0.07%, 10=3.65%, 20=95.50%, 50=0.78% 00:41:53.477 cpu : usr=3.70%, sys=13.39%, ctx=196, majf=0, minf=13 00:41:53.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:41:53.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:53.477 issued rwts: total=4608,4933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.477 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:41:53.477 job3: (groupid=0, jobs=1): err= 0: pid=66315: Mon Dec 9 05:32:40 2024 00:41:53.477 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:41:53.477 slat (usec): min=7, max=7870, avg=100.72, stdev=624.56 00:41:53.477 clat (usec): min=6421, max=24211, avg=14021.11, stdev=1610.74 00:41:53.477 lat (usec): min=6432, max=25681, avg=14121.83, stdev=1624.01 00:41:53.477 clat percentiles (usec): 00:41:53.477 | 1.00th=[ 8848], 5.00th=[12387], 10.00th=[13042], 20.00th=[13435], 00:41:53.477 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:41:53.477 | 70.00th=[14353], 80.00th=[14615], 90.00th=[14877], 95.00th=[15270], 00:41:53.477 | 99.00th=[20841], 99.50th=[21365], 99.90th=[24249], 99.95th=[24249], 00:41:53.477 | 99.99th=[24249] 00:41:53.477 write: IOPS=4930, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1005msec); 0 zone resets 00:41:53.477 slat (usec): min=5, max=13274, avg=101.49, stdev=644.09 00:41:53.477 clat (usec): min=805, max=24749, avg=12674.83, stdev=1772.76 00:41:53.477 lat (usec): min=5022, max=24781, avg=12776.32, stdev=1704.26 00:41:53.477 clat percentiles (usec): 00:41:53.477 | 1.00th=[ 6718], 5.00th=[ 9634], 10.00th=[11338], 20.00th=[11731], 00:41:53.477 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:41:53.477 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[15401], 00:41:53.477 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:41:53.477 | 99.99th=[24773] 00:41:53.477 bw ( KiB/s): min=18136, max=20521, per=30.44%, avg=19328.50, stdev=1686.45, samples=2 00:41:53.477 iops : min= 4534, max= 5130, avg=4832.00, stdev=421.44, samples=2 00:41:53.477 lat (usec) : 1000=0.01% 00:41:53.477 lat (msec) : 10=4.55%, 20=94.75%, 50=0.69% 00:41:53.477 cpu : usr=4.78%, sys=12.15%, ctx=205, majf=0, minf=17 00:41:53.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:41:53.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:53.477 issued rwts: total=4608,4955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:53.477 00:41:53.477 Run status group 0 (all jobs): 00:41:53.477 READ: bw=56.2MiB/s (58.9MB/s), 10.2MiB/s-18.0MiB/s (10.7MB/s-18.8MB/s), io=56.7MiB (59.5MB), run=1002-1010msec 00:41:53.477 WRITE: bw=62.0MiB/s (65.0MB/s), 11.9MiB/s-19.3MiB/s (12.5MB/s-20.2MB/s), io=62.6MiB (65.7MB), run=1002-1010msec 00:41:53.477 00:41:53.477 Disk stats (read/write): 00:41:53.477 nvme0n1: ios=2231/2560, merge=0/0, ticks=50790/50214, in_queue=101004, util=87.22% 00:41:53.477 nvme0n2: ios=2177/2560, merge=0/0, ticks=48634/52812, in_queue=101446, util=87.42% 00:41:53.477 nvme0n3: ios=3916/4096, merge=0/0, ticks=51696/48730, in_queue=100426, util=89.06% 00:41:53.477 nvme0n4: ios=3942/4096, merge=0/0, ticks=52248/47697, in_queue=99945, util=89.20% 00:41:53.477 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:53.477 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66328 00:41:53.477 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:53.477 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:53.477 [global] 00:41:53.477 thread=1 00:41:53.477 invalidate=1 00:41:53.477 rw=read 
00:41:53.477 time_based=1 00:41:53.477 runtime=10 00:41:53.477 ioengine=libaio 00:41:53.477 direct=1 00:41:53.477 bs=4096 00:41:53.477 iodepth=1 00:41:53.477 norandommap=1 00:41:53.477 numjobs=1 00:41:53.477 00:41:53.477 [job0] 00:41:53.477 filename=/dev/nvme0n1 00:41:53.477 [job1] 00:41:53.477 filename=/dev/nvme0n2 00:41:53.477 [job2] 00:41:53.477 filename=/dev/nvme0n3 00:41:53.477 [job3] 00:41:53.477 filename=/dev/nvme0n4 00:41:53.477 Could not set queue depth (nvme0n1) 00:41:53.477 Could not set queue depth (nvme0n2) 00:41:53.477 Could not set queue depth (nvme0n3) 00:41:53.477 Could not set queue depth (nvme0n4) 00:41:53.477 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:53.477 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:53.477 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:53.477 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:53.477 fio-3.35 00:41:53.477 Starting 4 threads 00:41:56.763 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:56.763 fio: pid=66377, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:56.763 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=53694464, buflen=4096 00:41:56.763 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:56.763 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=68677632, buflen=4096 00:41:56.763 fio: pid=66376, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:57.022 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:57.022 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:57.022 fio: pid=66374, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:57.022 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2084864, buflen=4096 00:41:57.281 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:57.281 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:57.281 fio: pid=66375, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:57.281 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=20213760, buflen=4096 00:41:57.541 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:57.541 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:57.541 00:41:57.541 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66374: Mon Dec 9 05:32:44 2024 00:41:57.541 read: IOPS=4768, BW=18.6MiB/s (19.5MB/s)(66.0MiB/3543msec) 00:41:57.541 slat (usec): min=7, max=10976, avg=14.60, 
stdev=146.61 00:41:57.541 clat (usec): min=2, max=2736, avg=193.90, stdev=61.84 00:41:57.541 lat (usec): min=140, max=11182, avg=208.50, stdev=158.88 00:41:57.541 clat percentiles (usec): 00:41:57.541 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:41:57.541 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:41:57.541 | 70.00th=[ 233], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 273], 00:41:57.541 | 99.00th=[ 293], 99.50th=[ 326], 99.90th=[ 709], 99.95th=[ 1123], 00:41:57.541 | 99.99th=[ 2343] 00:41:57.541 bw ( KiB/s): min=14286, max=22632, per=25.99%, avg=18555.67, stdev=3960.87, samples=6 00:41:57.541 iops : min= 3571, max= 5658, avg=4638.83, stdev=990.33, samples=6 00:41:57.541 lat (usec) : 4=0.01%, 250=80.30%, 500=19.46%, 750=0.15%, 1000=0.02% 00:41:57.541 lat (msec) : 2=0.04%, 4=0.02% 00:41:57.541 cpu : usr=1.44%, sys=5.28%, ctx=16902, majf=0, minf=1 00:41:57.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:57.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.541 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.541 issued rwts: total=16894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:57.541 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66375: Mon Dec 9 05:32:44 2024 00:41:57.541 read: IOPS=5588, BW=21.8MiB/s (22.9MB/s)(83.3MiB/3815msec) 00:41:57.541 slat (usec): min=10, max=10799, avg=17.08, stdev=151.66 00:41:57.541 clat (usec): min=88, max=1702, avg=160.38, stdev=23.75 00:41:57.541 lat (usec): min=129, max=10963, avg=177.45, stdev=154.05 00:41:57.541 clat percentiles (usec): 00:41:57.541 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:41:57.541 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:41:57.541 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:41:57.541 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 297], 99.95th=[ 506], 00:41:57.541 | 99.99th=[ 1287] 00:41:57.541 bw ( KiB/s): min=21528, max=22696, per=31.18%, avg=22259.14, stdev=410.65, samples=7 00:41:57.542 iops : min= 5382, max= 5674, avg=5564.71, stdev=102.67, samples=7 00:41:57.542 lat (usec) : 100=0.01%, 250=99.83%, 500=0.11%, 750=0.04% 00:41:57.542 lat (msec) : 2=0.01% 00:41:57.542 cpu : usr=1.68%, sys=7.47%, ctx=21331, majf=0, minf=1 00:41:57.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:57.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.542 issued rwts: total=21320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:57.542 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66376: Mon Dec 9 05:32:44 2024 00:41:57.542 read: IOPS=5101, BW=19.9MiB/s (20.9MB/s)(65.5MiB/3287msec) 00:41:57.542 slat (usec): min=11, max=9826, avg=15.17, stdev=99.38 00:41:57.542 clat (usec): min=140, max=3508, avg=179.53, stdev=39.82 00:41:57.542 lat (usec): min=155, max=10030, avg=194.70, stdev=107.27 00:41:57.542 clat percentiles (usec): 00:41:57.542 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:41:57.542 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:41:57.542 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 
196], 95.00th=[ 202], 00:41:57.542 | 99.00th=[ 223], 99.50th=[ 237], 99.90th=[ 506], 99.95th=[ 775], 00:41:57.542 | 99.99th=[ 2114] 00:41:57.542 bw ( KiB/s): min=20216, max=20896, per=28.77%, avg=20537.00, stdev=269.55, samples=6 00:41:57.542 iops : min= 5054, max= 5224, avg=5134.17, stdev=67.49, samples=6 00:41:57.542 lat (usec) : 250=99.67%, 500=0.22%, 750=0.05%, 1000=0.02% 00:41:57.542 lat (msec) : 2=0.02%, 4=0.01% 00:41:57.542 cpu : usr=1.16%, sys=6.42%, ctx=16771, majf=0, minf=2 00:41:57.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:57.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.542 issued rwts: total=16768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:57.542 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66377: Mon Dec 9 05:32:44 2024 00:41:57.542 read: IOPS=4400, BW=17.2MiB/s (18.0MB/s)(51.2MiB/2979msec) 00:41:57.542 slat (nsec): min=7675, max=98491, avg=13285.48, stdev=3064.67 00:41:57.542 clat (usec): min=137, max=7489, avg=212.66, stdev=131.19 00:41:57.542 lat (usec): min=159, max=7506, avg=225.94, stdev=131.08 00:41:57.542 clat percentiles (usec): 00:41:57.542 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:41:57.542 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 223], 00:41:57.542 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 273], 00:41:57.542 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 1074], 99.95th=[ 2540], 00:41:57.542 | 99.99th=[ 6980] 00:41:57.542 bw ( KiB/s): min=14992, max=20848, per=25.46%, avg=18177.60, stdev=2947.55, samples=5 00:41:57.542 iops : min= 3748, max= 5212, avg=4544.40, stdev=736.89, samples=5 00:41:57.542 lat (usec) : 250=76.64%, 500=23.14%, 750=0.10%, 1000=0.01% 00:41:57.542 lat (msec) : 2=0.02%, 4=0.06%, 10=0.03% 00:41:57.542 cpu : usr=1.07%, sys=5.67%, ctx=13112, majf=0, minf=2 00:41:57.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:57.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.542 issued rwts: total=13110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:57.542 00:41:57.542 Run status group 0 (all jobs): 00:41:57.542 READ: bw=69.7MiB/s (73.1MB/s), 17.2MiB/s-21.8MiB/s (18.0MB/s-22.9MB/s), io=266MiB (279MB), run=2979-3815msec 00:41:57.542 00:41:57.542 Disk stats (read/write): 00:41:57.542 nvme0n1: ios=15851/0, merge=0/0, ticks=3055/0, in_queue=3055, util=95.42% 00:41:57.542 nvme0n2: ios=20096/0, merge=0/0, ticks=3257/0, in_queue=3257, util=95.58% 00:41:57.542 nvme0n3: ios=15901/0, merge=0/0, ticks=2886/0, in_queue=2886, util=96.33% 00:41:57.542 nvme0n4: ios=12701/0, merge=0/0, ticks=2639/0, in_queue=2639, util=96.39% 00:41:57.802 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:57.802 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:58.061 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:41:58.061 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:58.320 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:58.320 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:58.577 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:58.577 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:58.834 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:58.834 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66328 00:41:58.834 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:58.834 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:58.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:58.835 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:58.835 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:41:58.835 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:58.835 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:58.835 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:58.835 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:58.835 nvmf hotplug test: fio failed as expected 00:41:58.835 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:41:58.835 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:58.835 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:58.835 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:59.093 rmmod nvme_tcp 00:41:59.093 rmmod nvme_fabrics 00:41:59.093 rmmod nvme_keyring 00:41:59.093 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:59.094 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:59.094 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 65948 ']' 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 65948 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 65948 ']' 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 65948 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65948 00:41:59.094 killing process with pid 65948 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65948' 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 65948 00:41:59.094 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 65948 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:59.352 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:41:59.611 00:41:59.611 real 0m19.617s 00:41:59.611 user 1m13.535s 00:41:59.611 sys 0m10.099s 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:59.611 ************************************ 00:41:59.611 END TEST nvmf_fio_target 00:41:59.611 ************************************ 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:41:59.611 ************************************ 00:41:59.611 START TEST nvmf_bdevio 00:41:59.611 ************************************ 00:41:59.611 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:41:59.871 * Looking for test storage... 
00:41:59.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.871 --rc genhtml_branch_coverage=1 00:41:59.871 --rc genhtml_function_coverage=1 00:41:59.871 --rc genhtml_legend=1 00:41:59.871 --rc geninfo_all_blocks=1 00:41:59.871 --rc geninfo_unexecuted_blocks=1 00:41:59.871 00:41:59.871 ' 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.871 --rc genhtml_branch_coverage=1 00:41:59.871 --rc genhtml_function_coverage=1 00:41:59.871 --rc genhtml_legend=1 00:41:59.871 --rc geninfo_all_blocks=1 00:41:59.871 --rc geninfo_unexecuted_blocks=1 00:41:59.871 00:41:59.871 ' 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.871 --rc genhtml_branch_coverage=1 00:41:59.871 --rc genhtml_function_coverage=1 00:41:59.871 --rc genhtml_legend=1 00:41:59.871 --rc geninfo_all_blocks=1 00:41:59.871 --rc geninfo_unexecuted_blocks=1 00:41:59.871 00:41:59.871 ' 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.871 --rc genhtml_branch_coverage=1 00:41:59.871 --rc genhtml_function_coverage=1 00:41:59.871 --rc genhtml_legend=1 00:41:59.871 --rc geninfo_all_blocks=1 00:41:59.871 --rc geninfo_unexecuted_blocks=1 00:41:59.871 00:41:59.871 ' 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:59.871 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:59.872 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:59.872 Cannot find device "nvmf_init_br" 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:59.872 Cannot find device "nvmf_init_br2" 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:59.872 Cannot find device "nvmf_tgt_br" 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:59.872 Cannot find device "nvmf_tgt_br2" 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:59.872 Cannot find device "nvmf_init_br" 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:59.872 Cannot find device "nvmf_init_br2" 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:41:59.872 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:00.130 Cannot find device "nvmf_tgt_br" 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:00.130 Cannot find device "nvmf_tgt_br2" 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:00.130 Cannot find device "nvmf_br" 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:00.130 Cannot find device "nvmf_init_if" 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:00.130 Cannot find device "nvmf_init_if2" 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:00.130 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:00.130 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:00.130 
05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:00.130 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:00.130 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:00.388 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:00.388 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:42:00.388 00:42:00.388 --- 10.0.0.3 ping statistics --- 00:42:00.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.388 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:00.388 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:00.388 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:42:00.388 00:42:00.388 --- 10.0.0.4 ping statistics --- 00:42:00.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.388 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:00.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:00.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:42:00.388 00:42:00.388 --- 10.0.0.1 ping statistics --- 00:42:00.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.388 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:00.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:00.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:42:00.388 00:42:00.388 --- 10.0.0.2 ping statistics --- 00:42:00.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.388 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66695 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66695 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66695 ']' 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:00.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:00.388 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:00.388 [2024-12-09 05:32:47.249946] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
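With the namespaced network verified by the pings above, nvmfappstart launches nvmf_tgt inside nvmf_tgt_ns_spdk (its EAL and reactor banner continues below). The veth/bridge topology that the nvmf/common.sh entries above assembled can be recreated stand-alone roughly as follows; this is a minimal sketch that uses only the first initiator/target veth pair and the device names and 10.0.0.0/24 addresses shown in the log, and it must run as root:

    # Namespace for the target, plus two veth pairs joined by a bridge on the host.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Accept NVMe/TCP (port 4420) on the initiator interface and let the bridge forward.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity check: host-side 10.0.0.1 should reach the namespaced 10.0.0.3.
    ping -c 1 10.0.0.3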
00:42:00.388 [2024-12-09 05:32:47.250057] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:00.646 [2024-12-09 05:32:47.411271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:00.647 [2024-12-09 05:32:47.451928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:00.647 [2024-12-09 05:32:47.451986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:00.647 [2024-12-09 05:32:47.452000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:00.647 [2024-12-09 05:32:47.452010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:00.647 [2024-12-09 05:32:47.452019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:00.647 [2024-12-09 05:32:47.453211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:00.647 [2024-12-09 05:32:47.453306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:00.647 [2024-12-09 05:32:47.453415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:00.647 [2024-12-09 05:32:47.453422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:00.647 [2024-12-09 05:32:47.486806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:00.647 [2024-12-09 05:32:47.582731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.647 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:00.905 Malloc0 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:00.905 [2024-12-09 05:32:47.642149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:00.905 { 00:42:00.905 "params": { 00:42:00.905 "name": "Nvme$subsystem", 00:42:00.905 "trtype": "$TEST_TRANSPORT", 00:42:00.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:00.905 "adrfam": "ipv4", 00:42:00.905 "trsvcid": "$NVMF_PORT", 00:42:00.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:00.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:00.905 "hdgst": ${hdgst:-false}, 00:42:00.905 "ddgst": ${ddgst:-false} 00:42:00.905 }, 00:42:00.905 "method": "bdev_nvme_attach_controller" 00:42:00.905 } 00:42:00.905 EOF 00:42:00.905 )") 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
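The target/bdevio.sh@18-@24 entries above provision the just-started target over its RPC socket: a TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.3:4420; gen_nvmf_target_json then builds the attach-controller JSON (printed below) that bdevio reads through --json /dev/fd/62. The same provisioning, sketched with SPDK's rpc.py instead of the test's rpc_cmd helper (the script path is assumed from the repo layout seen elsewhere in this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # assumed path to the standard SPDK RPC client
    $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, same options as logged above
    $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB, 512-byte blocks -> 131072 blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420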
00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:42:00.905 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:00.905 "params": { 00:42:00.905 "name": "Nvme1", 00:42:00.905 "trtype": "tcp", 00:42:00.905 "traddr": "10.0.0.3", 00:42:00.905 "adrfam": "ipv4", 00:42:00.905 "trsvcid": "4420", 00:42:00.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:00.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:00.905 "hdgst": false, 00:42:00.905 "ddgst": false 00:42:00.905 }, 00:42:00.905 "method": "bdev_nvme_attach_controller" 00:42:00.905 }' 00:42:00.905 [2024-12-09 05:32:47.702273] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:42:00.905 [2024-12-09 05:32:47.702379] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66729 ] 00:42:00.905 [2024-12-09 05:32:47.856170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:01.164 [2024-12-09 05:32:47.892108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:01.164 [2024-12-09 05:32:47.892209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:01.164 [2024-12-09 05:32:47.892216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:01.164 [2024-12-09 05:32:47.931110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:01.164 I/O targets: 00:42:01.164 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:01.164 00:42:01.164 00:42:01.164 CUnit - A unit testing framework for C - Version 2.1-3 00:42:01.164 http://cunit.sourceforge.net/ 00:42:01.164 00:42:01.164 00:42:01.164 Suite: bdevio tests on: Nvme1n1 00:42:01.164 Test: blockdev write read block ...passed 00:42:01.164 Test: blockdev write zeroes read block ...passed 00:42:01.164 Test: blockdev write zeroes read no split ...passed 00:42:01.164 Test: blockdev write zeroes read split ...passed 00:42:01.164 Test: blockdev write zeroes read split partial ...passed 00:42:01.164 Test: blockdev reset ...[2024-12-09 05:32:48.060157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:42:01.164 [2024-12-09 05:32:48.060520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137b190 (9): Bad file descriptor 00:42:01.164 [2024-12-09 05:32:48.078396] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:42:01.164 passed 00:42:01.164 Test: blockdev write read 8 blocks ...passed 00:42:01.164 Test: blockdev write read size > 128k ...passed 00:42:01.164 Test: blockdev write read invalid size ...passed 00:42:01.164 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:01.164 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:01.164 Test: blockdev write read max offset ...passed 00:42:01.164 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:01.164 Test: blockdev writev readv 8 blocks ...passed 00:42:01.164 Test: blockdev writev readv 30 x 1block ...passed 00:42:01.164 Test: blockdev writev readv block ...passed 00:42:01.164 Test: blockdev writev readv size > 128k ...passed 00:42:01.164 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:01.164 Test: blockdev comparev and writev ...[2024-12-09 05:32:48.086012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:01.165 [2024-12-09 05:32:48.086057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:01.165 [2024-12-09 05:32:48.086081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:01.165 [2024-12-09 05:32:48.086095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:01.165 [2024-12-09 05:32:48.086392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:01.165 [2024-12-09 05:32:48.086418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:01.165 [2024-12-09 05:32:48.086440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:01.165 [2024-12-09 05:32:48.086452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:01.165 [2024-12-09 05:32:48.086760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:01.165 [2024-12-09 05:32:48.086786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:01.165 [2024-12-09 05:32:48.086807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:01.165 [2024-12-09 05:32:48.086820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:01.165 [2024-12-09 05:32:48.087107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:01.165 [2024-12-09 05:32:48.087135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:01.165 [2024-12-09 05:32:48.087156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:01.165 [2024-12-09 05:32:48.087168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:01.165 passed 00:42:01.165 Test: blockdev nvme passthru rw ...passed 00:42:01.165 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:32:48.088101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:01.165 [2024-12-09 05:32:48.088141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:01.165 [2024-12-09 05:32:48.088271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:01.165 [2024-12-09 05:32:48.088306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:01.165 [2024-12-09 05:32:48.088422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:01.165 [2024-12-09 05:32:48.088447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:01.165 passed 00:42:01.165 Test: blockdev nvme admin passthru ...[2024-12-09 05:32:48.088578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:01.165 [2024-12-09 05:32:48.088602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:01.165 passed 00:42:01.165 Test: blockdev copy ...passed 00:42:01.165 00:42:01.165 Run Summary: Type Total Ran Passed Failed Inactive 00:42:01.165 suites 1 1 n/a 0 0 00:42:01.165 tests 23 23 23 0 0 00:42:01.165 asserts 152 152 152 0 n/a 00:42:01.165 00:42:01.165 Elapsed time = 0.142 seconds 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:01.423 rmmod nvme_tcp 00:42:01.423 rmmod nvme_fabrics 00:42:01.423 rmmod nvme_keyring 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
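The entries that follow are nvmftestfini tearing the environment back down: the nvmf_tgt process is killed, only the SPDK-tagged iptables rules are removed, and the veth/bridge/namespace topology is deleted. A condensed sketch of that sequence (it assumes $nvmfpid holds the target pid, 66695 in this run, and the last step is an assumption since the log hides it inside _remove_spdk_ns):

    kill "$nvmfpid"                                    # pid 66695 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if                        # its veth peer nvmf_init_br goes with it
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk                   # assumed equivalent of _remove_spdk_ns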
00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66695 ']' 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66695 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66695 ']' 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66695 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:01.423 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66695 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66695' 00:42:01.681 killing process with pid 66695 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66695 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66695 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:01.681 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:42:01.939 00:42:01.939 real 0m2.339s 00:42:01.939 user 0m5.729s 00:42:01.939 sys 0m0.771s 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:01.939 05:32:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:01.939 ************************************ 00:42:01.939 END TEST nvmf_bdevio 00:42:01.939 ************************************ 00:42:02.198 05:32:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:02.198 ************************************ 00:42:02.198 END TEST nvmf_target_core 00:42:02.198 ************************************ 00:42:02.198 00:42:02.198 real 2m31.173s 00:42:02.198 user 6m35.463s 00:42:02.198 sys 0m51.137s 00:42:02.198 05:32:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:02.198 05:32:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:42:02.198 05:32:48 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:42:02.198 05:32:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:02.198 05:32:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:02.198 05:32:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:02.198 ************************************ 00:42:02.198 START TEST nvmf_target_extra 00:42:02.198 ************************************ 00:42:02.198 05:32:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:42:02.198 * Looking for test storage... 
00:42:02.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:02.198 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:02.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.199 --rc genhtml_branch_coverage=1 00:42:02.199 --rc genhtml_function_coverage=1 00:42:02.199 --rc genhtml_legend=1 00:42:02.199 --rc geninfo_all_blocks=1 00:42:02.199 --rc geninfo_unexecuted_blocks=1 00:42:02.199 00:42:02.199 ' 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:02.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.199 --rc genhtml_branch_coverage=1 00:42:02.199 --rc genhtml_function_coverage=1 00:42:02.199 --rc genhtml_legend=1 00:42:02.199 --rc geninfo_all_blocks=1 00:42:02.199 --rc geninfo_unexecuted_blocks=1 00:42:02.199 00:42:02.199 ' 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:02.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.199 --rc genhtml_branch_coverage=1 00:42:02.199 --rc genhtml_function_coverage=1 00:42:02.199 --rc genhtml_legend=1 00:42:02.199 --rc geninfo_all_blocks=1 00:42:02.199 --rc geninfo_unexecuted_blocks=1 00:42:02.199 00:42:02.199 ' 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:02.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.199 --rc genhtml_branch_coverage=1 00:42:02.199 --rc genhtml_function_coverage=1 00:42:02.199 --rc genhtml_legend=1 00:42:02.199 --rc geninfo_all_blocks=1 00:42:02.199 --rc geninfo_unexecuted_blocks=1 00:42:02.199 00:42:02.199 ' 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:02.199 05:32:49 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:02.199 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:42:02.199 ************************************ 00:42:02.199 START TEST nvmf_auth_target 00:42:02.199 ************************************ 00:42:02.199 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:42:02.465 * Looking for test storage... 
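The "[: : integer expression expected" complaint a few entries above comes from common.sh line 33 evaluating '[' '' -eq 1 ']' with an unset variable; the test simply fails and the script carries on, so it is noise rather than a failure. A hypothetical guard that would keep the comparison numeric (the variable name and branch body are placeholders, since the log does not show them):

    # Placeholder names: the real flag tested at common.sh line 33 is not visible in this log.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        :   # whatever the guarded branch configures
    fi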
00:42:02.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.465 --rc genhtml_branch_coverage=1 00:42:02.465 --rc genhtml_function_coverage=1 00:42:02.465 --rc genhtml_legend=1 00:42:02.465 --rc geninfo_all_blocks=1 00:42:02.465 --rc geninfo_unexecuted_blocks=1 00:42:02.465 00:42:02.465 ' 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.465 --rc genhtml_branch_coverage=1 00:42:02.465 --rc genhtml_function_coverage=1 00:42:02.465 --rc genhtml_legend=1 00:42:02.465 --rc geninfo_all_blocks=1 00:42:02.465 --rc geninfo_unexecuted_blocks=1 00:42:02.465 00:42:02.465 ' 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.465 --rc genhtml_branch_coverage=1 00:42:02.465 --rc genhtml_function_coverage=1 00:42:02.465 --rc genhtml_legend=1 00:42:02.465 --rc geninfo_all_blocks=1 00:42:02.465 --rc geninfo_unexecuted_blocks=1 00:42:02.465 00:42:02.465 ' 00:42:02.465 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.465 --rc genhtml_branch_coverage=1 00:42:02.465 --rc genhtml_function_coverage=1 00:42:02.465 --rc genhtml_legend=1 00:42:02.465 --rc geninfo_all_blocks=1 00:42:02.465 --rc geninfo_unexecuted_blocks=1 00:42:02.465 00:42:02.465 ' 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:02.466 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:02.466 
05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:02.466 Cannot find device "nvmf_init_br" 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:02.466 Cannot find device "nvmf_init_br2" 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:02.466 Cannot find device "nvmf_tgt_br" 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:42:02.466 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:02.738 Cannot find device "nvmf_tgt_br2" 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:02.738 Cannot find device "nvmf_init_br" 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:02.738 Cannot find device "nvmf_init_br2" 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:02.738 Cannot find device "nvmf_tgt_br" 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:02.738 Cannot find device "nvmf_tgt_br2" 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:02.738 Cannot find device "nvmf_br" 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:02.738 Cannot find device "nvmf_init_if" 00:42:02.738 05:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:02.738 Cannot find device "nvmf_init_if2" 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:02.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:02.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:02.738 05:32:49 
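The nvmf_veth_init sequence above builds the test network in a few steps: a dedicated network namespace for the target, two veth pairs for the initiator side and two for the target side, the target ends moved into the namespace, and 10.0.0.0/24 addresses assigned. (The "Cannot find device" / "Cannot open network namespace" messages are the preceding cleanup pass finding nothing to remove; each is followed by `true`, so they are expected on a fresh run.) Condensed into the commands actually visible in this trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator-side pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pairs
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_if2 up      # bring everything up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up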
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:02.738 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:02.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:02.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:42:02.997 00:42:02.997 --- 10.0.0.3 ping statistics --- 00:42:02.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:02.997 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:02.997 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:02.997 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:42:02.997 00:42:02.997 --- 10.0.0.4 ping statistics --- 00:42:02.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:02.997 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:02.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:02.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:42:02.997 00:42:02.997 --- 10.0.0.1 ping statistics --- 00:42:02.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:02.997 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:02.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:02.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:42:02.997 00:42:02.997 --- 10.0.0.2 ping statistics --- 00:42:02.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:02.997 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67012 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67012 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67012 ']' 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
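The peer ends are then bridged together and iptables rules open TCP port 4420 (the NVMe/TCP listen port) toward the initiator interfaces; the four pings simply confirm connectivity in both directions before the target is started. Roughly:

    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br  master nvmf_br      # all four peer interfaces on one bridge
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3; ping -c 1 10.0.0.4                         # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # namespace -> host
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

NVMF_APP is then prefixed with the namespace command, which is why the nvmf_tgt started by nvmfappstart below runs as `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth`, so the listeners it creates later (traddr 10.0.0.3, trsvcid 4420 in the qpair dumps further down) live inside the namespace.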
00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:02.997 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67031 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:42:03.255 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:42:03.256 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d4605519fbac7b62164f0b6b3f708dd3c1b976f49b62efe5 00:42:03.256 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.a3x 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d4605519fbac7b62164f0b6b3f708dd3c1b976f49b62efe5 0 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d4605519fbac7b62164f0b6b3f708dd3c1b976f49b62efe5 0 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d4605519fbac7b62164f0b6b3f708dd3c1b976f49b62efe5 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:03.516 05:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.a3x 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.a3x 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.a3x 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cca01bdd89d86da02948f50323d4fd1e87b5a2522cb282c9d75fe27f637e293d 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5E0 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cca01bdd89d86da02948f50323d4fd1e87b5a2522cb282c9d75fe27f637e293d 3 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cca01bdd89d86da02948f50323d4fd1e87b5a2522cb282c9d75fe27f637e293d 3 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cca01bdd89d86da02948f50323d4fd1e87b5a2522cb282c9d75fe27f637e293d 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5E0 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5E0 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.5E0 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:42:03.516 05:32:50 
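gen_dhchap_key produces each secret the same way as the key0/ckey0 pair just generated: read random hex of the requested length from /dev/urandom with xxd, wrap it into a DHHC-1 secret using the digest index from the digests map (null=0, sha256=1, sha384=2, sha512=3), and store it mode 0600 under /tmp. A rough outline of one call, with the wrapping helper left abstract (its inline `python -` body is not shown in this trace):

    digest=null len=48                                  # e.g. gen_dhchap_key null 48
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # $len hex characters of key material
    file=$(mktemp -t "spdk.key-$digest.XXX")            # e.g. /tmp/spdk.key-null.a3x
    format_dhchap_key "$key" "${digests[$digest]}"      # emits the DHHC-1:<digest>:... string, which ends up in $file
    chmod 0600 "$file"
    echo "$file"                                        # path is what is recorded in keys[i] / ckeys[i]

The same pattern repeats below for keys 1-3 and their controller keys; key 3 deliberately has no controller key (ckeys[3] is left empty).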
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ed928724bfa3fa840a86dfc75cfa9226 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.iZx 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ed928724bfa3fa840a86dfc75cfa9226 1 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ed928724bfa3fa840a86dfc75cfa9226 1 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ed928724bfa3fa840a86dfc75cfa9226 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.iZx 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.iZx 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.iZx 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d866e630ba15b430b51c61c622abb376be48a5aa4dbb0542 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.MEF 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d866e630ba15b430b51c61c622abb376be48a5aa4dbb0542 2 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d866e630ba15b430b51c61c622abb376be48a5aa4dbb0542 2 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d866e630ba15b430b51c61c622abb376be48a5aa4dbb0542 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.MEF 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.MEF 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.MEF 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:42:03.516 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9a0b2363bebe7c89f45f631cd8e7c55b7392940d0e09efe4 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.96c 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9a0b2363bebe7c89f45f631cd8e7c55b7392940d0e09efe4 2 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9a0b2363bebe7c89f45f631cd8e7c55b7392940d0e09efe4 2 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9a0b2363bebe7c89f45f631cd8e7c55b7392940d0e09efe4 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.96c 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.96c 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.96c 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:03.776 05:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9c87fd1a3cc4df91efe737215b12edfd 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ctc 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9c87fd1a3cc4df91efe737215b12edfd 1 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9c87fd1a3cc4df91efe737215b12edfd 1 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9c87fd1a3cc4df91efe737215b12edfd 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ctc 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ctc 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ctc 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a37582dd5e626cca9f45ef9d26cc18a55e5af25efb6338bf8fa212ef22c96a15 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.D7Y 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
a37582dd5e626cca9f45ef9d26cc18a55e5af25efb6338bf8fa212ef22c96a15 3 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a37582dd5e626cca9f45ef9d26cc18a55e5af25efb6338bf8fa212ef22c96a15 3 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a37582dd5e626cca9f45ef9d26cc18a55e5af25efb6338bf8fa212ef22c96a15 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.D7Y 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.D7Y 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.D7Y 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67012 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67012 ']' 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:03.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:03.776 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:04.345 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:04.345 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:42:04.345 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67031 /var/tmp/host.sock 00:42:04.345 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67031 ']' 00:42:04.345 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:42:04.345 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:04.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:42:04.345 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:42:04.346 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:04.346 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.a3x 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.a3x 00:42:04.605 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.a3x 00:42:04.864 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.5E0 ]] 00:42:04.864 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5E0 00:42:04.864 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:04.864 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:04.864 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:04.864 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5E0 00:42:04.864 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5E0 00:42:05.122 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:42:05.122 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.iZx 00:42:05.122 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.122 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:05.122 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.122 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
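Both SPDK applications need the keys in their keyrings: rpc_cmd registers them with the nvmf target over its default RPC socket (/var/tmp/spdk.sock, the one waited on above), and hostrpc registers the same files with the host-side spdk_tgt over /var/tmp/host.sock. For key0/ckey0 the four calls boil down to:

    # target side (nvmf_tgt, default socket /var/tmp/spdk.sock)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.a3x
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5E0

    # host side (spdk_tgt acting as the NVMe-oF initiator, socket /var/tmp/host.sock)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.a3x
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5E0

The remaining pairs are registered the same way below under the names key1..key3 and ckey1..ckey2.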
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.iZx 00:42:05.122 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.iZx 00:42:05.380 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.MEF ]] 00:42:05.380 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MEF 00:42:05.380 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.380 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:05.380 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.380 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MEF 00:42:05.380 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MEF 00:42:05.639 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:42:05.639 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.96c 00:42:05.639 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.639 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:05.639 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.639 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.96c 00:42:05.639 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.96c 00:42:05.897 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ctc ]] 00:42:05.897 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ctc 00:42:05.897 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.897 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:05.897 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.897 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ctc 00:42:05.897 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ctc 00:42:06.156 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:42:06.156 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.D7Y 00:42:06.156 05:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.156 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:06.156 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.156 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.D7Y 00:42:06.156 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.D7Y 00:42:06.415 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:42:06.415 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:42:06.415 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:06.415 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:06.415 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:06.415 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:06.674 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:06.675 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:06.934 00:42:07.200 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:07.200 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:07.200 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:07.458 { 00:42:07.458 "cntlid": 1, 00:42:07.458 "qid": 0, 00:42:07.458 "state": "enabled", 00:42:07.458 "thread": "nvmf_tgt_poll_group_000", 00:42:07.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:07.458 "listen_address": { 00:42:07.458 "trtype": "TCP", 00:42:07.458 "adrfam": "IPv4", 00:42:07.458 "traddr": "10.0.0.3", 00:42:07.458 "trsvcid": "4420" 00:42:07.458 }, 00:42:07.458 "peer_address": { 00:42:07.458 "trtype": "TCP", 00:42:07.458 "adrfam": "IPv4", 00:42:07.458 "traddr": "10.0.0.1", 00:42:07.458 "trsvcid": "54860" 00:42:07.458 }, 00:42:07.458 "auth": { 00:42:07.458 "state": "completed", 00:42:07.458 "digest": "sha256", 00:42:07.458 "dhgroup": "null" 00:42:07.458 } 00:42:07.458 } 00:42:07.458 ]' 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:07.458 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:08.026 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:08.026 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:12.207 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:12.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:12.207 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:12.207 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.207 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:12.207 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.207 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:12.207 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:12.207 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:12.772 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:12.772 05:32:59 
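Each pass of the digest/dhgroup/key loop exercises one key pair end to end: restrict the host-side allowed DH-HMAC-CHAP parameters, allow the host NQN on the subsystem with that key pair, attach a controller through the SPDK host (bdev_nvme), confirm on the target that the queue pair reports auth state "completed", detach, and then repeat the connect with the kernel initiator using the literal DHHC-1 secrets. For the first pass (sha256 digest, null dhgroup, key0) the sequence condenses to the following sketch (the RPC/HOST/HOSTNQN/SUBNQN variables are just shorthand here):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST="-s /var/tmp/host.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    $RPC $HOST bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC $HOST bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expect "completed"
    $RPC $HOST bdev_nvme_detach_controller nvme0

    # same key pair through the kernel initiator, passing the raw DHHC-1 secrets
    nvme connect -t tcp -a 10.0.0.3 -n $SUBNQN -i 1 -q $HOSTNQN \
        --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 \
        --dhchap-secret DHHC-1:00:ZDQ2...: --dhchap-ctrl-secret DHHC-1:03:Y2Nh...:
    nvme disconnect -n $SUBNQN
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

(The secrets are abbreviated above; the full DHHC-1 strings are the ones printed by nvme_connect in the trace.) The iterations that follow repeat this with key1, key2, and key3, key3 being the one configured without a controller key.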
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:13.028 00:42:13.028 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:13.028 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:13.028 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:13.285 { 00:42:13.285 "cntlid": 3, 00:42:13.285 "qid": 0, 00:42:13.285 "state": "enabled", 00:42:13.285 "thread": "nvmf_tgt_poll_group_000", 00:42:13.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:13.285 "listen_address": { 00:42:13.285 "trtype": "TCP", 00:42:13.285 "adrfam": "IPv4", 00:42:13.285 "traddr": "10.0.0.3", 00:42:13.285 "trsvcid": "4420" 00:42:13.285 }, 00:42:13.285 "peer_address": { 00:42:13.285 "trtype": "TCP", 00:42:13.285 "adrfam": "IPv4", 00:42:13.285 "traddr": "10.0.0.1", 00:42:13.285 "trsvcid": "54884" 00:42:13.285 }, 00:42:13.285 "auth": { 00:42:13.285 "state": "completed", 00:42:13.285 "digest": "sha256", 00:42:13.285 "dhgroup": "null" 00:42:13.285 } 00:42:13.285 } 00:42:13.285 ]' 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:13.285 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:13.543 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret 
DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:13.543 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:14.479 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:14.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:14.479 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:14.479 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.479 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:14.480 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.480 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:14.480 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:14.480 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:14.738 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:14.999 00:42:14.999 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:14.999 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:14.999 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:15.258 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:15.258 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:15.258 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.258 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:15.517 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.517 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:15.517 { 00:42:15.517 "cntlid": 5, 00:42:15.517 "qid": 0, 00:42:15.517 "state": "enabled", 00:42:15.517 "thread": "nvmf_tgt_poll_group_000", 00:42:15.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:15.517 "listen_address": { 00:42:15.517 "trtype": "TCP", 00:42:15.517 "adrfam": "IPv4", 00:42:15.517 "traddr": "10.0.0.3", 00:42:15.517 "trsvcid": "4420" 00:42:15.517 }, 00:42:15.517 "peer_address": { 00:42:15.517 "trtype": "TCP", 00:42:15.517 "adrfam": "IPv4", 00:42:15.517 "traddr": "10.0.0.1", 00:42:15.517 "trsvcid": "50214" 00:42:15.517 }, 00:42:15.517 "auth": { 00:42:15.517 "state": "completed", 00:42:15.517 "digest": "sha256", 00:42:15.517 "dhgroup": "null" 00:42:15.517 } 00:42:15.517 } 00:42:15.517 ]' 00:42:15.517 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:15.517 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:15.517 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:15.517 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:42:15.517 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:15.517 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:15.517 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:15.517 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:15.777 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:15.777 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:16.713 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:16.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:16.713 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:16.713 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.713 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:16.713 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.713 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:16.713 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:16.713 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:16.988 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:17.247 00:42:17.247 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:17.247 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:17.247 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:17.505 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:17.505 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:17.505 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.505 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:17.505 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.506 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:17.506 { 00:42:17.506 "cntlid": 7, 00:42:17.506 "qid": 0, 00:42:17.506 "state": "enabled", 00:42:17.506 "thread": "nvmf_tgt_poll_group_000", 00:42:17.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:17.506 "listen_address": { 00:42:17.506 "trtype": "TCP", 00:42:17.506 "adrfam": "IPv4", 00:42:17.506 "traddr": "10.0.0.3", 00:42:17.506 "trsvcid": "4420" 00:42:17.506 }, 00:42:17.506 "peer_address": { 00:42:17.506 "trtype": "TCP", 00:42:17.506 "adrfam": "IPv4", 00:42:17.506 "traddr": "10.0.0.1", 00:42:17.506 "trsvcid": "50258" 00:42:17.506 }, 00:42:17.506 "auth": { 00:42:17.506 "state": "completed", 00:42:17.506 "digest": "sha256", 00:42:17.506 "dhgroup": "null" 00:42:17.506 } 00:42:17.506 } 00:42:17.506 ]' 00:42:17.506 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:17.506 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:17.506 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:17.765 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:42:17.765 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:17.765 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:17.765 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:17.765 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:18.025 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:42:18.025 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:42:18.972 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:18.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:18.973 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:18.973 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.973 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:18.973 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.973 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:18.973 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:18.973 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:18.973 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:19.231 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:19.490 00:42:19.490 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:19.490 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:19.490 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:19.747 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:19.747 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:19.747 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.747 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:19.747 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.747 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:19.747 { 00:42:19.747 "cntlid": 9, 00:42:19.747 "qid": 0, 00:42:19.747 "state": "enabled", 00:42:19.747 "thread": "nvmf_tgt_poll_group_000", 00:42:19.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:19.747 "listen_address": { 00:42:19.747 "trtype": "TCP", 00:42:19.747 "adrfam": "IPv4", 00:42:19.747 "traddr": "10.0.0.3", 00:42:19.747 "trsvcid": "4420" 00:42:19.747 }, 00:42:19.747 "peer_address": { 00:42:19.747 "trtype": "TCP", 00:42:19.747 "adrfam": "IPv4", 00:42:19.747 "traddr": "10.0.0.1", 00:42:19.747 "trsvcid": "50270" 00:42:19.747 }, 00:42:19.747 "auth": { 00:42:19.747 "state": "completed", 00:42:19.747 "digest": "sha256", 00:42:19.747 "dhgroup": "ffdhe2048" 00:42:19.747 } 00:42:19.747 } 00:42:19.747 ]' 00:42:19.747 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:20.004 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:20.004 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:20.004 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:20.004 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:20.004 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:20.004 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:20.004 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:20.262 
05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:20.262 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:20.827 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:21.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:21.086 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:21.086 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.086 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:21.086 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.086 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:21.086 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:21.086 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:21.344 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:21.910 00:42:21.910 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:21.910 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:21.910 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:22.167 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:22.167 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:22.167 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.167 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:22.167 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.167 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:22.167 { 00:42:22.167 "cntlid": 11, 00:42:22.167 "qid": 0, 00:42:22.167 "state": "enabled", 00:42:22.167 "thread": "nvmf_tgt_poll_group_000", 00:42:22.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:22.167 "listen_address": { 00:42:22.167 "trtype": "TCP", 00:42:22.167 "adrfam": "IPv4", 00:42:22.167 "traddr": "10.0.0.3", 00:42:22.167 "trsvcid": "4420" 00:42:22.167 }, 00:42:22.167 "peer_address": { 00:42:22.167 "trtype": "TCP", 00:42:22.167 "adrfam": "IPv4", 00:42:22.167 "traddr": "10.0.0.1", 00:42:22.167 "trsvcid": "50296" 00:42:22.167 }, 00:42:22.167 "auth": { 00:42:22.167 "state": "completed", 00:42:22.167 "digest": "sha256", 00:42:22.167 "dhgroup": "ffdhe2048" 00:42:22.167 } 00:42:22.167 } 00:42:22.167 ]' 00:42:22.167 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:22.167 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:22.167 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:22.167 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:22.167 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:22.167 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:22.167 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:22.167 
05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:22.732 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:22.732 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:23.296 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:23.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:23.296 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:23.296 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:23.296 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:23.296 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:23.296 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:23.296 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:23.296 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:23.553 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:23.810 00:42:23.810 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:23.810 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:23.810 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:24.068 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:24.068 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:24.069 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.069 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:24.069 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.327 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:24.327 { 00:42:24.327 "cntlid": 13, 00:42:24.327 "qid": 0, 00:42:24.327 "state": "enabled", 00:42:24.327 "thread": "nvmf_tgt_poll_group_000", 00:42:24.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:24.327 "listen_address": { 00:42:24.327 "trtype": "TCP", 00:42:24.327 "adrfam": "IPv4", 00:42:24.327 "traddr": "10.0.0.3", 00:42:24.327 "trsvcid": "4420" 00:42:24.327 }, 00:42:24.327 "peer_address": { 00:42:24.327 "trtype": "TCP", 00:42:24.327 "adrfam": "IPv4", 00:42:24.327 "traddr": "10.0.0.1", 00:42:24.327 "trsvcid": "50318" 00:42:24.327 }, 00:42:24.327 "auth": { 00:42:24.327 "state": "completed", 00:42:24.327 "digest": "sha256", 00:42:24.327 "dhgroup": "ffdhe2048" 00:42:24.327 } 00:42:24.327 } 00:42:24.327 ]' 00:42:24.327 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:24.327 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:24.327 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:24.327 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:24.327 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:24.327 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:24.327 05:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:24.327 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:24.585 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:24.585 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:25.571 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:25.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:25.571 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:25.571 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.571 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:25.571 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.571 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:25.571 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:25.571 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:25.829 05:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:26.089 00:42:26.089 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:26.089 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:26.089 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:26.348 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:26.348 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:26.348 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.348 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:26.348 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.348 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:26.348 { 00:42:26.348 "cntlid": 15, 00:42:26.348 "qid": 0, 00:42:26.348 "state": "enabled", 00:42:26.348 "thread": "nvmf_tgt_poll_group_000", 00:42:26.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:26.348 "listen_address": { 00:42:26.348 "trtype": "TCP", 00:42:26.348 "adrfam": "IPv4", 00:42:26.348 "traddr": "10.0.0.3", 00:42:26.348 "trsvcid": "4420" 00:42:26.348 }, 00:42:26.348 "peer_address": { 00:42:26.348 "trtype": "TCP", 00:42:26.348 "adrfam": "IPv4", 00:42:26.348 "traddr": "10.0.0.1", 00:42:26.348 "trsvcid": "39506" 00:42:26.348 }, 00:42:26.348 "auth": { 00:42:26.348 "state": "completed", 00:42:26.348 "digest": "sha256", 00:42:26.348 "dhgroup": "ffdhe2048" 00:42:26.348 } 00:42:26.348 } 00:42:26.348 ]' 00:42:26.348 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:26.606 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:26.606 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:26.606 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:26.606 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:26.606 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:26.606 
05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:26.606 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:26.865 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:42:26.865 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:42:27.799 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:27.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:27.799 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:27.799 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.799 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:27.799 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.799 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:27.799 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:27.799 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:27.799 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:28.066 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:28.325 00:42:28.325 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:28.325 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:28.325 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:28.583 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:28.583 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:28.583 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.583 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:28.583 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.583 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:28.583 { 00:42:28.583 "cntlid": 17, 00:42:28.583 "qid": 0, 00:42:28.583 "state": "enabled", 00:42:28.583 "thread": "nvmf_tgt_poll_group_000", 00:42:28.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:28.583 "listen_address": { 00:42:28.583 "trtype": "TCP", 00:42:28.583 "adrfam": "IPv4", 00:42:28.583 "traddr": "10.0.0.3", 00:42:28.583 "trsvcid": "4420" 00:42:28.583 }, 00:42:28.583 "peer_address": { 00:42:28.583 "trtype": "TCP", 00:42:28.583 "adrfam": "IPv4", 00:42:28.583 "traddr": "10.0.0.1", 00:42:28.583 "trsvcid": "39540" 00:42:28.583 }, 00:42:28.583 "auth": { 00:42:28.583 "state": "completed", 00:42:28.583 "digest": "sha256", 00:42:28.583 "dhgroup": "ffdhe3072" 00:42:28.583 } 00:42:28.583 } 00:42:28.583 ]' 00:42:28.583 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:28.841 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:28.841 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:28.841 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:28.841 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:28.841 05:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:28.841 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:28.841 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:29.099 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:29.099 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:30.034 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:30.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:30.034 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:30.034 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.034 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:30.034 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.034 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:30.034 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:30.034 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:30.292 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:30.551 00:42:30.551 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:30.551 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:30.551 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:30.810 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:30.810 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:30.810 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.810 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:31.067 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.067 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:31.068 { 00:42:31.068 "cntlid": 19, 00:42:31.068 "qid": 0, 00:42:31.068 "state": "enabled", 00:42:31.068 "thread": "nvmf_tgt_poll_group_000", 00:42:31.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:31.068 "listen_address": { 00:42:31.068 "trtype": "TCP", 00:42:31.068 "adrfam": "IPv4", 00:42:31.068 "traddr": "10.0.0.3", 00:42:31.068 "trsvcid": "4420" 00:42:31.068 }, 00:42:31.068 "peer_address": { 00:42:31.068 "trtype": "TCP", 00:42:31.068 "adrfam": "IPv4", 00:42:31.068 "traddr": "10.0.0.1", 00:42:31.068 "trsvcid": "39560" 00:42:31.068 }, 00:42:31.068 "auth": { 00:42:31.068 "state": "completed", 00:42:31.068 "digest": "sha256", 00:42:31.068 "dhgroup": "ffdhe3072" 00:42:31.068 } 00:42:31.068 } 00:42:31.068 ]' 00:42:31.068 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:31.068 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:31.068 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:31.068 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:31.068 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:31.068 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:31.068 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:31.068 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:31.325 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:31.325 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:32.259 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:32.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:32.259 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:32.259 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.259 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:32.259 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.259 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:32.259 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:32.259 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:32.259 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:32.823 00:42:32.823 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:32.823 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:32.823 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:33.080 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:33.080 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:33.080 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.080 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:33.080 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.080 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:33.080 { 00:42:33.080 "cntlid": 21, 00:42:33.080 "qid": 0, 00:42:33.080 "state": "enabled", 00:42:33.080 "thread": "nvmf_tgt_poll_group_000", 00:42:33.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:33.081 "listen_address": { 00:42:33.081 "trtype": "TCP", 00:42:33.081 "adrfam": "IPv4", 00:42:33.081 "traddr": "10.0.0.3", 00:42:33.081 "trsvcid": "4420" 00:42:33.081 }, 00:42:33.081 "peer_address": { 00:42:33.081 "trtype": "TCP", 00:42:33.081 "adrfam": "IPv4", 00:42:33.081 "traddr": "10.0.0.1", 00:42:33.081 "trsvcid": "39586" 00:42:33.081 }, 00:42:33.081 "auth": { 00:42:33.081 "state": "completed", 00:42:33.081 "digest": "sha256", 00:42:33.081 "dhgroup": "ffdhe3072" 00:42:33.081 } 00:42:33.081 } 00:42:33.081 ]' 00:42:33.081 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:33.081 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:33.081 05:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:33.081 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:33.081 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:33.081 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:33.081 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:33.081 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:33.338 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:33.338 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:34.269 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:34.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:34.269 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:34.269 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.269 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:34.269 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.269 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:34.269 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:34.269 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:34.269 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:34.832 00:42:34.832 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:34.832 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:34.832 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:35.091 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:35.091 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:35.091 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.091 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:35.091 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.091 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:35.091 { 00:42:35.091 "cntlid": 23, 00:42:35.091 "qid": 0, 00:42:35.091 "state": "enabled", 00:42:35.091 "thread": "nvmf_tgt_poll_group_000", 00:42:35.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:35.091 "listen_address": { 00:42:35.091 "trtype": "TCP", 00:42:35.091 "adrfam": "IPv4", 00:42:35.091 "traddr": "10.0.0.3", 00:42:35.091 "trsvcid": "4420" 00:42:35.091 }, 00:42:35.091 "peer_address": { 00:42:35.091 "trtype": "TCP", 00:42:35.091 "adrfam": "IPv4", 00:42:35.091 "traddr": "10.0.0.1", 00:42:35.091 "trsvcid": "35230" 00:42:35.091 }, 00:42:35.091 "auth": { 00:42:35.091 "state": "completed", 00:42:35.091 "digest": "sha256", 00:42:35.091 "dhgroup": "ffdhe3072" 00:42:35.091 } 00:42:35.091 } 00:42:35.091 ]' 00:42:35.091 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:35.091 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:42:35.091 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:35.091 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:35.091 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:35.091 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:35.091 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:35.091 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:35.657 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:42:35.657 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:42:36.224 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:36.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:36.224 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:36.224 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:36.224 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:36.224 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:36.224 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:36.224 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:36.224 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:36.224 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:36.483 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:36.741 00:42:36.741 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:36.741 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:36.741 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:37.001 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:37.001 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:37.001 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:37.001 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:37.001 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:37.001 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:37.001 { 00:42:37.001 "cntlid": 25, 00:42:37.001 "qid": 0, 00:42:37.001 "state": "enabled", 00:42:37.001 "thread": "nvmf_tgt_poll_group_000", 00:42:37.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:37.001 "listen_address": { 00:42:37.001 "trtype": "TCP", 00:42:37.001 "adrfam": "IPv4", 00:42:37.001 "traddr": "10.0.0.3", 00:42:37.001 "trsvcid": "4420" 00:42:37.001 }, 00:42:37.001 "peer_address": { 00:42:37.001 "trtype": "TCP", 00:42:37.001 "adrfam": "IPv4", 00:42:37.001 "traddr": "10.0.0.1", 00:42:37.001 "trsvcid": "35260" 00:42:37.001 }, 00:42:37.001 "auth": { 00:42:37.001 "state": "completed", 00:42:37.001 "digest": "sha256", 00:42:37.001 "dhgroup": "ffdhe4096" 00:42:37.001 } 00:42:37.001 } 00:42:37.001 ]' 00:42:37.001 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:42:37.260 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:37.260 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:37.260 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:37.260 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:37.260 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:37.260 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:37.260 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:37.519 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:37.519 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:38.087 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:38.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:38.087 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:38.087 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.087 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:38.087 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.087 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:38.087 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:38.087 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:38.651 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:38.909 00:42:38.909 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:38.909 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:38.909 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:39.168 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:39.168 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:39.168 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:39.168 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:39.168 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:39.168 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:39.168 { 00:42:39.168 "cntlid": 27, 00:42:39.168 "qid": 0, 00:42:39.168 "state": "enabled", 00:42:39.168 "thread": "nvmf_tgt_poll_group_000", 00:42:39.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:39.168 "listen_address": { 00:42:39.168 "trtype": "TCP", 00:42:39.168 "adrfam": "IPv4", 00:42:39.168 "traddr": "10.0.0.3", 00:42:39.168 "trsvcid": "4420" 00:42:39.168 }, 00:42:39.168 "peer_address": { 00:42:39.168 "trtype": "TCP", 00:42:39.168 "adrfam": "IPv4", 00:42:39.168 "traddr": "10.0.0.1", 00:42:39.168 "trsvcid": "35292" 00:42:39.168 }, 00:42:39.168 "auth": { 00:42:39.168 "state": "completed", 
00:42:39.168 "digest": "sha256", 00:42:39.168 "dhgroup": "ffdhe4096" 00:42:39.168 } 00:42:39.168 } 00:42:39.168 ]' 00:42:39.168 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:39.168 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:39.168 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:39.426 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:39.426 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:39.426 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:39.426 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:39.426 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:39.683 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:39.683 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:40.248 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:40.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:40.248 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:40.248 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.248 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:40.505 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.505 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:40.505 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:40.505 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:40.763 05:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:40.763 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:41.020 00:42:41.021 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:41.021 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:41.021 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:41.277 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:41.277 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:41.277 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.277 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:41.277 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.277 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:41.277 { 00:42:41.277 "cntlid": 29, 00:42:41.277 "qid": 0, 00:42:41.277 "state": "enabled", 00:42:41.277 "thread": "nvmf_tgt_poll_group_000", 00:42:41.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:41.277 "listen_address": { 00:42:41.277 "trtype": "TCP", 00:42:41.277 "adrfam": "IPv4", 00:42:41.277 "traddr": "10.0.0.3", 00:42:41.277 "trsvcid": "4420" 00:42:41.277 }, 00:42:41.277 "peer_address": { 00:42:41.278 "trtype": "TCP", 00:42:41.278 "adrfam": 
"IPv4", 00:42:41.278 "traddr": "10.0.0.1", 00:42:41.278 "trsvcid": "35310" 00:42:41.278 }, 00:42:41.278 "auth": { 00:42:41.278 "state": "completed", 00:42:41.278 "digest": "sha256", 00:42:41.278 "dhgroup": "ffdhe4096" 00:42:41.278 } 00:42:41.278 } 00:42:41.278 ]' 00:42:41.278 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:41.535 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:41.535 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:41.535 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:41.535 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:41.535 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:41.535 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:41.535 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:41.791 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:41.791 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:42.356 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:42.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:42.356 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:42.356 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.356 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:42.356 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.356 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:42.356 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:42.356 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:42.614 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:42:42.614 05:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:42.614 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:42.614 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:42:42.614 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:42:42.614 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:42.614 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:42:42.614 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.614 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:42.872 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.872 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:42:42.872 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:42.872 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:43.150 00:42:43.150 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:43.150 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:43.150 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:43.409 { 00:42:43.409 "cntlid": 31, 00:42:43.409 "qid": 0, 00:42:43.409 "state": "enabled", 00:42:43.409 "thread": "nvmf_tgt_poll_group_000", 00:42:43.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:43.409 "listen_address": { 00:42:43.409 "trtype": "TCP", 00:42:43.409 "adrfam": "IPv4", 00:42:43.409 "traddr": "10.0.0.3", 00:42:43.409 "trsvcid": "4420" 00:42:43.409 }, 00:42:43.409 "peer_address": { 00:42:43.409 "trtype": "TCP", 
00:42:43.409 "adrfam": "IPv4", 00:42:43.409 "traddr": "10.0.0.1", 00:42:43.409 "trsvcid": "35336" 00:42:43.409 }, 00:42:43.409 "auth": { 00:42:43.409 "state": "completed", 00:42:43.409 "digest": "sha256", 00:42:43.409 "dhgroup": "ffdhe4096" 00:42:43.409 } 00:42:43.409 } 00:42:43.409 ]' 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:43.409 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:43.980 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:42:43.980 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:42:44.555 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:44.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:44.555 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:44.555 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.555 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:44.555 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.555 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:44.555 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:44.555 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:44.555 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:42:44.813 
05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:44.813 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:45.380 00:42:45.380 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:45.380 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:45.380 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:45.638 { 00:42:45.638 "cntlid": 33, 00:42:45.638 "qid": 0, 00:42:45.638 "state": "enabled", 00:42:45.638 "thread": "nvmf_tgt_poll_group_000", 00:42:45.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:45.638 "listen_address": { 00:42:45.638 "trtype": "TCP", 00:42:45.638 "adrfam": "IPv4", 00:42:45.638 "traddr": 
"10.0.0.3", 00:42:45.638 "trsvcid": "4420" 00:42:45.638 }, 00:42:45.638 "peer_address": { 00:42:45.638 "trtype": "TCP", 00:42:45.638 "adrfam": "IPv4", 00:42:45.638 "traddr": "10.0.0.1", 00:42:45.638 "trsvcid": "34988" 00:42:45.638 }, 00:42:45.638 "auth": { 00:42:45.638 "state": "completed", 00:42:45.638 "digest": "sha256", 00:42:45.638 "dhgroup": "ffdhe6144" 00:42:45.638 } 00:42:45.638 } 00:42:45.638 ]' 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:45.638 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:45.895 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:45.895 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:45.895 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:46.152 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:46.152 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:46.717 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:46.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:46.717 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:46.717 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.717 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:46.975 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.975 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:46.975 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:46.975 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:47.233 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:42:47.233 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:47.233 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:47.234 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:42:47.234 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:47.234 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:47.234 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:47.234 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.234 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:47.234 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.234 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:47.234 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:47.234 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:47.493 00:42:47.752 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:47.752 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:47.752 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:48.011 { 00:42:48.011 "cntlid": 35, 00:42:48.011 "qid": 0, 00:42:48.011 "state": "enabled", 00:42:48.011 "thread": "nvmf_tgt_poll_group_000", 
00:42:48.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:48.011 "listen_address": { 00:42:48.011 "trtype": "TCP", 00:42:48.011 "adrfam": "IPv4", 00:42:48.011 "traddr": "10.0.0.3", 00:42:48.011 "trsvcid": "4420" 00:42:48.011 }, 00:42:48.011 "peer_address": { 00:42:48.011 "trtype": "TCP", 00:42:48.011 "adrfam": "IPv4", 00:42:48.011 "traddr": "10.0.0.1", 00:42:48.011 "trsvcid": "35010" 00:42:48.011 }, 00:42:48.011 "auth": { 00:42:48.011 "state": "completed", 00:42:48.011 "digest": "sha256", 00:42:48.011 "dhgroup": "ffdhe6144" 00:42:48.011 } 00:42:48.011 } 00:42:48.011 ]' 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:48.011 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:48.274 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:48.274 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:49.211 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:49.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:49.212 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:49.212 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.212 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:49.212 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.212 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:49.212 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:49.212 05:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:49.212 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:49.780 00:42:49.780 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:49.780 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:49.780 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:50.039 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:50.039 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:50.039 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.039 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:50.039 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.039 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:50.039 { 
00:42:50.039 "cntlid": 37, 00:42:50.039 "qid": 0, 00:42:50.039 "state": "enabled", 00:42:50.039 "thread": "nvmf_tgt_poll_group_000", 00:42:50.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:50.039 "listen_address": { 00:42:50.039 "trtype": "TCP", 00:42:50.039 "adrfam": "IPv4", 00:42:50.039 "traddr": "10.0.0.3", 00:42:50.039 "trsvcid": "4420" 00:42:50.039 }, 00:42:50.039 "peer_address": { 00:42:50.039 "trtype": "TCP", 00:42:50.039 "adrfam": "IPv4", 00:42:50.039 "traddr": "10.0.0.1", 00:42:50.039 "trsvcid": "35036" 00:42:50.039 }, 00:42:50.039 "auth": { 00:42:50.039 "state": "completed", 00:42:50.039 "digest": "sha256", 00:42:50.039 "dhgroup": "ffdhe6144" 00:42:50.039 } 00:42:50.039 } 00:42:50.039 ]' 00:42:50.039 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:50.297 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:50.297 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:50.297 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:50.297 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:50.298 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:50.298 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:50.298 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:50.557 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:50.557 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:51.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:51.493 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:52.061 00:42:52.061 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:52.061 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:52.061 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:52.340 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:52.340 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:52.341 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.341 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:52.341 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.341 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:42:52.341 { 00:42:52.341 "cntlid": 39, 00:42:52.341 "qid": 0, 00:42:52.341 "state": "enabled", 00:42:52.341 "thread": "nvmf_tgt_poll_group_000", 00:42:52.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:52.341 "listen_address": { 00:42:52.341 "trtype": "TCP", 00:42:52.341 "adrfam": "IPv4", 00:42:52.341 "traddr": "10.0.0.3", 00:42:52.341 "trsvcid": "4420" 00:42:52.341 }, 00:42:52.341 "peer_address": { 00:42:52.341 "trtype": "TCP", 00:42:52.341 "adrfam": "IPv4", 00:42:52.341 "traddr": "10.0.0.1", 00:42:52.341 "trsvcid": "35072" 00:42:52.341 }, 00:42:52.341 "auth": { 00:42:52.341 "state": "completed", 00:42:52.341 "digest": "sha256", 00:42:52.341 "dhgroup": "ffdhe6144" 00:42:52.341 } 00:42:52.341 } 00:42:52.341 ]' 00:42:52.341 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:52.341 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:52.341 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:52.598 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:52.598 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:52.598 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:52.598 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:52.598 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:52.857 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:42:52.857 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:42:53.422 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:53.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:53.422 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:53.422 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.422 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:53.422 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.422 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:53.422 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:53.422 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:53.422 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:53.680 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:42:53.680 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:53.680 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:53.680 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:42:53.680 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:53.680 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:53.680 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:53.680 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.680 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:53.938 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.938 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:53.938 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:53.938 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:54.504 00:42:54.504 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:54.504 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:54.504 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:54.761 { 00:42:54.761 "cntlid": 41, 00:42:54.761 "qid": 0, 00:42:54.761 "state": "enabled", 00:42:54.761 "thread": "nvmf_tgt_poll_group_000", 00:42:54.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:54.761 "listen_address": { 00:42:54.761 "trtype": "TCP", 00:42:54.761 "adrfam": "IPv4", 00:42:54.761 "traddr": "10.0.0.3", 00:42:54.761 "trsvcid": "4420" 00:42:54.761 }, 00:42:54.761 "peer_address": { 00:42:54.761 "trtype": "TCP", 00:42:54.761 "adrfam": "IPv4", 00:42:54.761 "traddr": "10.0.0.1", 00:42:54.761 "trsvcid": "35098" 00:42:54.761 }, 00:42:54.761 "auth": { 00:42:54.761 "state": "completed", 00:42:54.761 "digest": "sha256", 00:42:54.761 "dhgroup": "ffdhe8192" 00:42:54.761 } 00:42:54.761 } 00:42:54.761 ]' 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:42:54.761 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:55.018 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:55.018 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:55.018 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:55.275 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:55.275 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:42:55.841 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:55.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:55.841 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:55.841 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.841 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:55.841 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
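Each connect_authenticate iteration recorded above follows the same shape: the target registers the host NQN on cnode0 with a DHCHAP key, the host attaches a bdev controller over TCP presenting the matching key, the qpair's auth fields are checked, and everything is torn down again. A minimal standalone sketch of that cycle, reusing the NQNs, address and key names from this run (key1/ckey1 are assumed to have been loaded into the keyring earlier in the run; the target-side commands are shown against the default rpc.py socket, while the host side uses /var/tmp/host.sock as above):

    # target: allow the host to authenticate against cnode0 with key1/ckey1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host: attach a controller over TCP, presenting the same key pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # target: the qpair should report a completed DH-HMAC-CHAP negotiation
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    # teardown before the next digest/dhgroup/key combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535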
00:42:55.841 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:55.841 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:55.841 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:56.099 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:57.036 00:42:57.037 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:57.037 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:57.037 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:57.037 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:57.037 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:57.037 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.037 05:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:57.037 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.037 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:57.037 { 00:42:57.037 "cntlid": 43, 00:42:57.037 "qid": 0, 00:42:57.037 "state": "enabled", 00:42:57.037 "thread": "nvmf_tgt_poll_group_000", 00:42:57.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:57.037 "listen_address": { 00:42:57.037 "trtype": "TCP", 00:42:57.037 "adrfam": "IPv4", 00:42:57.037 "traddr": "10.0.0.3", 00:42:57.037 "trsvcid": "4420" 00:42:57.037 }, 00:42:57.037 "peer_address": { 00:42:57.037 "trtype": "TCP", 00:42:57.037 "adrfam": "IPv4", 00:42:57.037 "traddr": "10.0.0.1", 00:42:57.037 "trsvcid": "49750" 00:42:57.037 }, 00:42:57.037 "auth": { 00:42:57.037 "state": "completed", 00:42:57.037 "digest": "sha256", 00:42:57.037 "dhgroup": "ffdhe8192" 00:42:57.037 } 00:42:57.037 } 00:42:57.037 ]' 00:42:57.037 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:57.295 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:57.295 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:57.295 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:42:57.295 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:57.295 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:57.295 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:57.295 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:57.553 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:57.553 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:42:58.119 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:58.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:58.119 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:42:58.119 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.119 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:42:58.394 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.394 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:58.394 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:58.394 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:58.651 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:59.218 00:42:59.218 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:59.218 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:59.218 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:59.475 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:59.475 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:59.475 05:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.475 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:59.475 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.475 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:59.475 { 00:42:59.475 "cntlid": 45, 00:42:59.475 "qid": 0, 00:42:59.476 "state": "enabled", 00:42:59.476 "thread": "nvmf_tgt_poll_group_000", 00:42:59.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:42:59.476 "listen_address": { 00:42:59.476 "trtype": "TCP", 00:42:59.476 "adrfam": "IPv4", 00:42:59.476 "traddr": "10.0.0.3", 00:42:59.476 "trsvcid": "4420" 00:42:59.476 }, 00:42:59.476 "peer_address": { 00:42:59.476 "trtype": "TCP", 00:42:59.476 "adrfam": "IPv4", 00:42:59.476 "traddr": "10.0.0.1", 00:42:59.476 "trsvcid": "49772" 00:42:59.476 }, 00:42:59.476 "auth": { 00:42:59.476 "state": "completed", 00:42:59.476 "digest": "sha256", 00:42:59.476 "dhgroup": "ffdhe8192" 00:42:59.476 } 00:42:59.476 } 00:42:59.476 ]' 00:42:59.476 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:59.476 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:59.476 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:59.476 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:42:59.476 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:59.734 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:59.734 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:59.734 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:59.993 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:42:59.993 05:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:00.562 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:00.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:00.562 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:00.562 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:43:00.562 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:00.562 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.562 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:00.562 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:43:00.562 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:01.130 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:01.699 00:43:01.699 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:01.699 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:01.699 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:01.979 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:01.979 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:01.979 
05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:01.979 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:01.979 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:01.979 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:01.979 { 00:43:01.979 "cntlid": 47, 00:43:01.979 "qid": 0, 00:43:01.979 "state": "enabled", 00:43:01.979 "thread": "nvmf_tgt_poll_group_000", 00:43:01.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:01.979 "listen_address": { 00:43:01.979 "trtype": "TCP", 00:43:01.979 "adrfam": "IPv4", 00:43:01.979 "traddr": "10.0.0.3", 00:43:01.979 "trsvcid": "4420" 00:43:01.979 }, 00:43:01.979 "peer_address": { 00:43:01.979 "trtype": "TCP", 00:43:01.979 "adrfam": "IPv4", 00:43:01.979 "traddr": "10.0.0.1", 00:43:01.979 "trsvcid": "49800" 00:43:01.979 }, 00:43:01.979 "auth": { 00:43:01.979 "state": "completed", 00:43:01.979 "digest": "sha256", 00:43:01.979 "dhgroup": "ffdhe8192" 00:43:01.979 } 00:43:01.979 } 00:43:01.979 ]' 00:43:01.979 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:01.979 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:01.979 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:01.979 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:01.979 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:02.246 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:02.246 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:02.246 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:02.247 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:02.247 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:03.180 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:03.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:03.180 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:03.180 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.180 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
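Before each pass the host-side bdev layer is restricted to a single digest/dhgroup combination, so a successful attach in that pass can only have negotiated exactly that pair. A sketch of the call that opens the sha384/null pass starting just below, against the same host RPC socket as above (with dhgroup "null" the DH-HMAC-CHAP exchange carries no FFDHE augmentation, which is why the qpair dumps in the following iterations report "dhgroup": "null"):

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups null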
00:43:03.180 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.180 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:43:03.180 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:03.180 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:03.180 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:03.180 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:03.438 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:03.695 00:43:03.695 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:03.695 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:03.695 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:03.953 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:03.953 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:03.953 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.953 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:04.211 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.211 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:04.211 { 00:43:04.211 "cntlid": 49, 00:43:04.211 "qid": 0, 00:43:04.211 "state": "enabled", 00:43:04.211 "thread": "nvmf_tgt_poll_group_000", 00:43:04.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:04.211 "listen_address": { 00:43:04.211 "trtype": "TCP", 00:43:04.211 "adrfam": "IPv4", 00:43:04.211 "traddr": "10.0.0.3", 00:43:04.211 "trsvcid": "4420" 00:43:04.211 }, 00:43:04.211 "peer_address": { 00:43:04.211 "trtype": "TCP", 00:43:04.211 "adrfam": "IPv4", 00:43:04.211 "traddr": "10.0.0.1", 00:43:04.211 "trsvcid": "49838" 00:43:04.211 }, 00:43:04.211 "auth": { 00:43:04.211 "state": "completed", 00:43:04.211 "digest": "sha384", 00:43:04.211 "dhgroup": "null" 00:43:04.211 } 00:43:04.211 } 00:43:04.211 ]' 00:43:04.211 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:04.211 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:04.211 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:04.211 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:43:04.211 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:04.211 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:04.211 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:04.211 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:04.468 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:04.468 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:05.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:05.400 05:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.400 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:05.657 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.658 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:05.658 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:05.658 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:05.914 00:43:05.914 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:05.914 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:05.914 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:06.173 { 00:43:06.173 "cntlid": 51, 00:43:06.173 "qid": 0, 00:43:06.173 "state": "enabled", 00:43:06.173 "thread": "nvmf_tgt_poll_group_000", 00:43:06.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:06.173 "listen_address": { 00:43:06.173 "trtype": "TCP", 00:43:06.173 "adrfam": "IPv4", 00:43:06.173 "traddr": "10.0.0.3", 00:43:06.173 "trsvcid": "4420" 00:43:06.173 }, 00:43:06.173 "peer_address": { 00:43:06.173 "trtype": "TCP", 00:43:06.173 "adrfam": "IPv4", 00:43:06.173 "traddr": "10.0.0.1", 00:43:06.173 "trsvcid": "56532" 00:43:06.173 }, 00:43:06.173 "auth": { 00:43:06.173 "state": "completed", 00:43:06.173 "digest": "sha384", 00:43:06.173 "dhgroup": "null" 00:43:06.173 } 00:43:06.173 } 00:43:06.173 ]' 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:43:06.173 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:06.431 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:06.431 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:06.431 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:06.689 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:06.689 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:07.256 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:07.256 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:07.256 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:07.256 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.256 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:07.256 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.256 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:07.256 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:07.256 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:07.823 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:08.082 00:43:08.082 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:08.082 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:08.082 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:08.340 { 00:43:08.340 "cntlid": 53, 00:43:08.340 "qid": 0, 00:43:08.340 "state": "enabled", 00:43:08.340 "thread": "nvmf_tgt_poll_group_000", 00:43:08.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:08.340 "listen_address": { 00:43:08.340 "trtype": "TCP", 00:43:08.340 "adrfam": "IPv4", 00:43:08.340 "traddr": "10.0.0.3", 00:43:08.340 "trsvcid": "4420" 00:43:08.340 }, 00:43:08.340 "peer_address": { 00:43:08.340 "trtype": "TCP", 00:43:08.340 "adrfam": "IPv4", 00:43:08.340 "traddr": "10.0.0.1", 00:43:08.340 "trsvcid": "56558" 00:43:08.340 }, 00:43:08.340 "auth": { 00:43:08.340 "state": "completed", 00:43:08.340 "digest": "sha384", 00:43:08.340 "dhgroup": "null" 00:43:08.340 } 00:43:08.340 } 00:43:08.340 ]' 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:08.340 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:08.905 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:08.905 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:09.471 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:09.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:09.471 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:09.471 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.471 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:09.471 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.471 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:09.471 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:09.471 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:09.729 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:09.985 00:43:09.985 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:09.985 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:43:09.985 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:10.242 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:10.242 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:10.242 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.242 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:10.242 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.242 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:10.242 { 00:43:10.242 "cntlid": 55, 00:43:10.242 "qid": 0, 00:43:10.242 "state": "enabled", 00:43:10.242 "thread": "nvmf_tgt_poll_group_000", 00:43:10.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:10.242 "listen_address": { 00:43:10.242 "trtype": "TCP", 00:43:10.242 "adrfam": "IPv4", 00:43:10.242 "traddr": "10.0.0.3", 00:43:10.242 "trsvcid": "4420" 00:43:10.242 }, 00:43:10.242 "peer_address": { 00:43:10.242 "trtype": "TCP", 00:43:10.242 "adrfam": "IPv4", 00:43:10.242 "traddr": "10.0.0.1", 00:43:10.242 "trsvcid": "56586" 00:43:10.242 }, 00:43:10.242 "auth": { 00:43:10.242 "state": "completed", 00:43:10.242 "digest": "sha384", 00:43:10.242 "dhgroup": "null" 00:43:10.242 } 00:43:10.242 } 00:43:10.242 ]' 00:43:10.242 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:10.242 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:10.242 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:10.501 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:43:10.501 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:10.501 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:10.501 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:10.501 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:10.760 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:10.760 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:11.328 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:11.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
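Besides the SPDK bdev host, every key is also exercised through the kernel initiator: nvme-cli is given the DHHC-1 secrets on the command line and the controller is dropped again with nvme disconnect. A sketch of that pair as it appears in this run, with <host-secret> and <ctrl-secret> standing in for the full DHHC-1 strings printed above (the controller secret is only passed for keys that were configured with a ctrl key):

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 \
        --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 \
        --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0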
00:43:11.328 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:11.328 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.328 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:11.328 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.328 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:11.328 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:11.328 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:11.328 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:11.587 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:12.154 00:43:12.154 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:12.154 
05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:12.154 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:12.413 { 00:43:12.413 "cntlid": 57, 00:43:12.413 "qid": 0, 00:43:12.413 "state": "enabled", 00:43:12.413 "thread": "nvmf_tgt_poll_group_000", 00:43:12.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:12.413 "listen_address": { 00:43:12.413 "trtype": "TCP", 00:43:12.413 "adrfam": "IPv4", 00:43:12.413 "traddr": "10.0.0.3", 00:43:12.413 "trsvcid": "4420" 00:43:12.413 }, 00:43:12.413 "peer_address": { 00:43:12.413 "trtype": "TCP", 00:43:12.413 "adrfam": "IPv4", 00:43:12.413 "traddr": "10.0.0.1", 00:43:12.413 "trsvcid": "56616" 00:43:12.413 }, 00:43:12.413 "auth": { 00:43:12.413 "state": "completed", 00:43:12.413 "digest": "sha384", 00:43:12.413 "dhgroup": "ffdhe2048" 00:43:12.413 } 00:43:12.413 } 00:43:12.413 ]' 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:12.413 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:12.671 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:12.671 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: 
--dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:13.604 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:13.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:13.604 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:13.604 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.604 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:13.604 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.604 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:13.604 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:13.604 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:13.863 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:14.121 00:43:14.121 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:14.121 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:14.121 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:14.379 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:14.379 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:14.379 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.379 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:14.379 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.379 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:14.379 { 00:43:14.379 "cntlid": 59, 00:43:14.379 "qid": 0, 00:43:14.379 "state": "enabled", 00:43:14.379 "thread": "nvmf_tgt_poll_group_000", 00:43:14.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:14.379 "listen_address": { 00:43:14.379 "trtype": "TCP", 00:43:14.379 "adrfam": "IPv4", 00:43:14.379 "traddr": "10.0.0.3", 00:43:14.379 "trsvcid": "4420" 00:43:14.379 }, 00:43:14.379 "peer_address": { 00:43:14.379 "trtype": "TCP", 00:43:14.379 "adrfam": "IPv4", 00:43:14.379 "traddr": "10.0.0.1", 00:43:14.379 "trsvcid": "56642" 00:43:14.379 }, 00:43:14.379 "auth": { 00:43:14.379 "state": "completed", 00:43:14.379 "digest": "sha384", 00:43:14.379 "dhgroup": "ffdhe2048" 00:43:14.379 } 00:43:14.379 } 00:43:14.379 ]' 00:43:14.379 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:14.379 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:14.379 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:14.635 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:14.636 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:14.636 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:14.636 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:14.636 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:14.893 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:14.893 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:15.473 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:15.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:15.473 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:15.473 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.473 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:15.473 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.473 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:15.473 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:15.473 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:15.729 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:16.292 00:43:16.292 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:16.292 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:16.292 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:16.549 { 00:43:16.549 "cntlid": 61, 00:43:16.549 "qid": 0, 00:43:16.549 "state": "enabled", 00:43:16.549 "thread": "nvmf_tgt_poll_group_000", 00:43:16.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:16.549 "listen_address": { 00:43:16.549 "trtype": "TCP", 00:43:16.549 "adrfam": "IPv4", 00:43:16.549 "traddr": "10.0.0.3", 00:43:16.549 "trsvcid": "4420" 00:43:16.549 }, 00:43:16.549 "peer_address": { 00:43:16.549 "trtype": "TCP", 00:43:16.549 "adrfam": "IPv4", 00:43:16.549 "traddr": "10.0.0.1", 00:43:16.549 "trsvcid": "42694" 00:43:16.549 }, 00:43:16.549 "auth": { 00:43:16.549 "state": "completed", 00:43:16.549 "digest": "sha384", 00:43:16.549 "dhgroup": "ffdhe2048" 00:43:16.549 } 00:43:16.549 } 00:43:16.549 ]' 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:16.549 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:17.171 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:17.171 05:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:17.783 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:17.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:17.783 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:17.783 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.783 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:17.783 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.783 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:17.783 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:17.784 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:18.042 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:18.301 00:43:18.301 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:18.301 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:18.301 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:18.560 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:18.560 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:18.560 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.560 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:18.560 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.560 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:18.560 { 00:43:18.560 "cntlid": 63, 00:43:18.560 "qid": 0, 00:43:18.560 "state": "enabled", 00:43:18.560 "thread": "nvmf_tgt_poll_group_000", 00:43:18.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:18.560 "listen_address": { 00:43:18.560 "trtype": "TCP", 00:43:18.560 "adrfam": "IPv4", 00:43:18.560 "traddr": "10.0.0.3", 00:43:18.560 "trsvcid": "4420" 00:43:18.560 }, 00:43:18.560 "peer_address": { 00:43:18.560 "trtype": "TCP", 00:43:18.560 "adrfam": "IPv4", 00:43:18.560 "traddr": "10.0.0.1", 00:43:18.560 "trsvcid": "42724" 00:43:18.560 }, 00:43:18.560 "auth": { 00:43:18.560 "state": "completed", 00:43:18.560 "digest": "sha384", 00:43:18.560 "dhgroup": "ffdhe2048" 00:43:18.560 } 00:43:18.560 } 00:43:18.560 ]' 00:43:18.560 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:18.818 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:18.818 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:18.818 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:18.818 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:18.818 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:18.818 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:18.818 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:19.077 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:19.077 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:19.642 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:19.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:19.642 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:19.642 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.642 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:19.642 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.642 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:19.642 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:19.642 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:19.642 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:43:19.901 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:20.468 00:43:20.468 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:20.468 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:20.468 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:20.726 { 00:43:20.726 "cntlid": 65, 00:43:20.726 "qid": 0, 00:43:20.726 "state": "enabled", 00:43:20.726 "thread": "nvmf_tgt_poll_group_000", 00:43:20.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:20.726 "listen_address": { 00:43:20.726 "trtype": "TCP", 00:43:20.726 "adrfam": "IPv4", 00:43:20.726 "traddr": "10.0.0.3", 00:43:20.726 "trsvcid": "4420" 00:43:20.726 }, 00:43:20.726 "peer_address": { 00:43:20.726 "trtype": "TCP", 00:43:20.726 "adrfam": "IPv4", 00:43:20.726 "traddr": "10.0.0.1", 00:43:20.726 "trsvcid": "42756" 00:43:20.726 }, 00:43:20.726 "auth": { 00:43:20.726 "state": "completed", 00:43:20.726 "digest": "sha384", 00:43:20.726 "dhgroup": "ffdhe3072" 00:43:20.726 } 00:43:20.726 } 00:43:20.726 ]' 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:20.726 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:21.292 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:21.292 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:21.860 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:21.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:21.860 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:21.860 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.860 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:21.860 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.860 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:21.860 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:21.860 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:22.119 05:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:22.119 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:22.378 00:43:22.638 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:22.638 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:22.638 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:22.897 { 00:43:22.897 "cntlid": 67, 00:43:22.897 "qid": 0, 00:43:22.897 "state": "enabled", 00:43:22.897 "thread": "nvmf_tgt_poll_group_000", 00:43:22.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:22.897 "listen_address": { 00:43:22.897 "trtype": "TCP", 00:43:22.897 "adrfam": "IPv4", 00:43:22.897 "traddr": "10.0.0.3", 00:43:22.897 "trsvcid": "4420" 00:43:22.897 }, 00:43:22.897 "peer_address": { 00:43:22.897 "trtype": "TCP", 00:43:22.897 "adrfam": "IPv4", 00:43:22.897 "traddr": "10.0.0.1", 00:43:22.897 "trsvcid": "42776" 00:43:22.897 }, 00:43:22.897 "auth": { 00:43:22.897 "state": "completed", 00:43:22.897 "digest": "sha384", 00:43:22.897 "dhgroup": "ffdhe3072" 00:43:22.897 } 00:43:22.897 } 00:43:22.897 ]' 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:22.897 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:23.163 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:23.163 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:24.098 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:24.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:24.098 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:24.098 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.098 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:24.098 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.098 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:24.098 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:24.098 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:24.098 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:24.664 00:43:24.664 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:24.664 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:24.664 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:24.922 { 00:43:24.922 "cntlid": 69, 00:43:24.922 "qid": 0, 00:43:24.922 "state": "enabled", 00:43:24.922 "thread": "nvmf_tgt_poll_group_000", 00:43:24.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:24.922 "listen_address": { 00:43:24.922 "trtype": "TCP", 00:43:24.922 "adrfam": "IPv4", 00:43:24.922 "traddr": "10.0.0.3", 00:43:24.922 "trsvcid": "4420" 00:43:24.922 }, 00:43:24.922 "peer_address": { 00:43:24.922 "trtype": "TCP", 00:43:24.922 "adrfam": "IPv4", 00:43:24.922 "traddr": "10.0.0.1", 00:43:24.922 "trsvcid": "37884" 00:43:24.922 }, 00:43:24.922 "auth": { 00:43:24.922 "state": "completed", 00:43:24.922 "digest": "sha384", 00:43:24.922 "dhgroup": "ffdhe3072" 00:43:24.922 } 00:43:24.922 } 00:43:24.922 ]' 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:43:24.922 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:25.180 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:25.180 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:26.115 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:26.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:26.115 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:26.115 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.115 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.115 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.115 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:26.115 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:26.115 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:26.115 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:26.680 00:43:26.680 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:26.680 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:26.680 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:26.939 { 00:43:26.939 "cntlid": 71, 00:43:26.939 "qid": 0, 00:43:26.939 "state": "enabled", 00:43:26.939 "thread": "nvmf_tgt_poll_group_000", 00:43:26.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:26.939 "listen_address": { 00:43:26.939 "trtype": "TCP", 00:43:26.939 "adrfam": "IPv4", 00:43:26.939 "traddr": "10.0.0.3", 00:43:26.939 "trsvcid": "4420" 00:43:26.939 }, 00:43:26.939 "peer_address": { 00:43:26.939 "trtype": "TCP", 00:43:26.939 "adrfam": "IPv4", 00:43:26.939 "traddr": "10.0.0.1", 00:43:26.939 "trsvcid": "37920" 00:43:26.939 }, 00:43:26.939 "auth": { 00:43:26.939 "state": "completed", 00:43:26.939 "digest": "sha384", 00:43:26.939 "dhgroup": "ffdhe3072" 00:43:26.939 } 00:43:26.939 } 00:43:26.939 ]' 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:26.939 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:27.506 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:27.506 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:28.074 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:28.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:28.074 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:28.074 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.074 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:28.074 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.074 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:28.074 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:28.074 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:28.074 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:28.344 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:43:28.344 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:28.345 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:28.345 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:43:28.345 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:28.345 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:28.345 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:28.345 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.345 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:28.345 05:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.345 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:28.345 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:28.345 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:28.602 00:43:28.860 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:28.860 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:28.860 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:29.119 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:29.119 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:29.119 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.119 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:29.119 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.119 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:29.119 { 00:43:29.119 "cntlid": 73, 00:43:29.119 "qid": 0, 00:43:29.119 "state": "enabled", 00:43:29.119 "thread": "nvmf_tgt_poll_group_000", 00:43:29.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:29.119 "listen_address": { 00:43:29.119 "trtype": "TCP", 00:43:29.119 "adrfam": "IPv4", 00:43:29.119 "traddr": "10.0.0.3", 00:43:29.119 "trsvcid": "4420" 00:43:29.119 }, 00:43:29.119 "peer_address": { 00:43:29.119 "trtype": "TCP", 00:43:29.119 "adrfam": "IPv4", 00:43:29.119 "traddr": "10.0.0.1", 00:43:29.119 "trsvcid": "37946" 00:43:29.119 }, 00:43:29.119 "auth": { 00:43:29.119 "state": "completed", 00:43:29.119 "digest": "sha384", 00:43:29.119 "dhgroup": "ffdhe4096" 00:43:29.119 } 00:43:29.119 } 00:43:29.119 ]' 00:43:29.119 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:29.119 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:29.119 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:29.119 05:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:29.119 05:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:29.119 05:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:29.119 05:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:29.119 05:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:29.685 05:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:29.685 05:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:30.253 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:30.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:30.253 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:30.253 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.253 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:30.253 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.253 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:30.253 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:30.253 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:30.512 05:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:30.512 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:31.080 00:43:31.080 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:31.080 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:31.080 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:31.339 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:31.339 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:31.339 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.339 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:31.339 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.339 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:31.339 { 00:43:31.339 "cntlid": 75, 00:43:31.339 "qid": 0, 00:43:31.339 "state": "enabled", 00:43:31.339 "thread": "nvmf_tgt_poll_group_000", 00:43:31.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:31.340 "listen_address": { 00:43:31.340 "trtype": "TCP", 00:43:31.340 "adrfam": "IPv4", 00:43:31.340 "traddr": "10.0.0.3", 00:43:31.340 "trsvcid": "4420" 00:43:31.340 }, 00:43:31.340 "peer_address": { 00:43:31.340 "trtype": "TCP", 00:43:31.340 "adrfam": "IPv4", 00:43:31.340 "traddr": "10.0.0.1", 00:43:31.340 "trsvcid": "37966" 00:43:31.340 }, 00:43:31.340 "auth": { 00:43:31.340 "state": "completed", 00:43:31.340 "digest": "sha384", 00:43:31.340 "dhgroup": "ffdhe4096" 00:43:31.340 } 00:43:31.340 } 00:43:31.340 ]' 00:43:31.340 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:31.340 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:31.340 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:31.340 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:43:31.340 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:31.340 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:31.340 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:31.340 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:31.600 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:31.600 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:32.570 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:32.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:32.570 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:32.570 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.570 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:32.570 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.570 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:32.570 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:32.570 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:32.839 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:33.098 00:43:33.098 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:33.098 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:33.098 05:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:33.357 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:33.357 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:33.357 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.357 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:33.357 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.357 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:33.357 { 00:43:33.357 "cntlid": 77, 00:43:33.357 "qid": 0, 00:43:33.357 "state": "enabled", 00:43:33.357 "thread": "nvmf_tgt_poll_group_000", 00:43:33.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:33.357 "listen_address": { 00:43:33.357 "trtype": "TCP", 00:43:33.357 "adrfam": "IPv4", 00:43:33.357 "traddr": "10.0.0.3", 00:43:33.357 "trsvcid": "4420" 00:43:33.357 }, 00:43:33.357 "peer_address": { 00:43:33.357 "trtype": "TCP", 00:43:33.357 "adrfam": "IPv4", 00:43:33.357 "traddr": "10.0.0.1", 00:43:33.357 "trsvcid": "37994" 00:43:33.357 }, 00:43:33.357 "auth": { 00:43:33.357 "state": "completed", 00:43:33.357 "digest": "sha384", 00:43:33.357 "dhgroup": "ffdhe4096" 00:43:33.357 } 00:43:33.357 } 00:43:33.357 ]' 00:43:33.357 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:33.357 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:33.357 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:43:33.617 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:33.617 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:33.617 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:33.617 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:33.617 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:33.876 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:33.876 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:34.443 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:34.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:34.702 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:34.702 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.702 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:34.702 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.702 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:34.702 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:34.702 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:34.959 05:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:34.959 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:35.218 00:43:35.218 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:35.218 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:35.218 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:35.784 { 00:43:35.784 "cntlid": 79, 00:43:35.784 "qid": 0, 00:43:35.784 "state": "enabled", 00:43:35.784 "thread": "nvmf_tgt_poll_group_000", 00:43:35.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:35.784 "listen_address": { 00:43:35.784 "trtype": "TCP", 00:43:35.784 "adrfam": "IPv4", 00:43:35.784 "traddr": "10.0.0.3", 00:43:35.784 "trsvcid": "4420" 00:43:35.784 }, 00:43:35.784 "peer_address": { 00:43:35.784 "trtype": "TCP", 00:43:35.784 "adrfam": "IPv4", 00:43:35.784 "traddr": "10.0.0.1", 00:43:35.784 "trsvcid": "42344" 00:43:35.784 }, 00:43:35.784 "auth": { 00:43:35.784 "state": "completed", 00:43:35.784 "digest": "sha384", 00:43:35.784 "dhgroup": "ffdhe4096" 00:43:35.784 } 00:43:35.784 } 00:43:35.784 ]' 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:35.784 05:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:35.784 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:36.042 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:36.042 05:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:36.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:36.972 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:37.537 00:43:37.537 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:37.537 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:37.537 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:38.103 { 00:43:38.103 "cntlid": 81, 00:43:38.103 "qid": 0, 00:43:38.103 "state": "enabled", 00:43:38.103 "thread": "nvmf_tgt_poll_group_000", 00:43:38.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:38.103 "listen_address": { 00:43:38.103 "trtype": "TCP", 00:43:38.103 "adrfam": "IPv4", 00:43:38.103 "traddr": "10.0.0.3", 00:43:38.103 "trsvcid": "4420" 00:43:38.103 }, 00:43:38.103 "peer_address": { 00:43:38.103 "trtype": "TCP", 00:43:38.103 "adrfam": "IPv4", 00:43:38.103 "traddr": "10.0.0.1", 00:43:38.103 "trsvcid": "42370" 00:43:38.103 }, 00:43:38.103 "auth": { 00:43:38.103 "state": "completed", 00:43:38.103 "digest": "sha384", 00:43:38.103 "dhgroup": "ffdhe6144" 00:43:38.103 } 00:43:38.103 } 00:43:38.103 ]' 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:38.103 05:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:38.361 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:38.361 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:39.295 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:39.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:39.296 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:39.296 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.296 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:39.296 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.296 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:39.296 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:39.296 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:39.555 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:39.813 00:43:39.813 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:39.813 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:39.813 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:40.382 { 00:43:40.382 "cntlid": 83, 00:43:40.382 "qid": 0, 00:43:40.382 "state": "enabled", 00:43:40.382 "thread": "nvmf_tgt_poll_group_000", 00:43:40.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:40.382 "listen_address": { 00:43:40.382 "trtype": "TCP", 00:43:40.382 "adrfam": "IPv4", 00:43:40.382 "traddr": "10.0.0.3", 00:43:40.382 "trsvcid": "4420" 00:43:40.382 }, 00:43:40.382 "peer_address": { 00:43:40.382 "trtype": "TCP", 00:43:40.382 "adrfam": "IPv4", 00:43:40.382 "traddr": "10.0.0.1", 00:43:40.382 "trsvcid": "42386" 00:43:40.382 }, 00:43:40.382 "auth": { 00:43:40.382 "state": "completed", 00:43:40.382 "digest": "sha384", 
00:43:40.382 "dhgroup": "ffdhe6144" 00:43:40.382 } 00:43:40.382 } 00:43:40.382 ]' 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:40.382 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:40.641 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:40.641 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:41.578 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:41.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:41.578 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:41.578 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.578 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:41.578 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.578 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:41.578 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:41.578 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:41.837 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:42.404 00:43:42.404 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:42.404 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:42.404 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:42.664 { 00:43:42.664 "cntlid": 85, 00:43:42.664 "qid": 0, 00:43:42.664 "state": "enabled", 00:43:42.664 "thread": "nvmf_tgt_poll_group_000", 00:43:42.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:42.664 "listen_address": { 00:43:42.664 "trtype": "TCP", 00:43:42.664 "adrfam": "IPv4", 00:43:42.664 "traddr": "10.0.0.3", 00:43:42.664 "trsvcid": "4420" 00:43:42.664 }, 00:43:42.664 "peer_address": { 00:43:42.664 "trtype": "TCP", 00:43:42.664 "adrfam": "IPv4", 00:43:42.664 "traddr": "10.0.0.1", 00:43:42.664 "trsvcid": "42428" 
00:43:42.664 }, 00:43:42.664 "auth": { 00:43:42.664 "state": "completed", 00:43:42.664 "digest": "sha384", 00:43:42.664 "dhgroup": "ffdhe6144" 00:43:42.664 } 00:43:42.664 } 00:43:42.664 ]' 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:42.664 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:42.923 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:42.923 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:43.860 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:43.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:43.860 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:43.860 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.860 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:43.860 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.860 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:43.861 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:43.861 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:44.120 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:44.380 00:43:44.639 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:44.639 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:44.639 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:44.899 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:44.899 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:44.899 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.899 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:44.899 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.899 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:44.899 { 00:43:44.899 "cntlid": 87, 00:43:44.899 "qid": 0, 00:43:44.899 "state": "enabled", 00:43:44.899 "thread": "nvmf_tgt_poll_group_000", 00:43:44.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:44.899 "listen_address": { 00:43:44.899 "trtype": "TCP", 00:43:44.899 "adrfam": "IPv4", 00:43:44.899 "traddr": "10.0.0.3", 00:43:44.899 "trsvcid": "4420" 00:43:44.899 }, 00:43:44.899 "peer_address": { 00:43:44.899 "trtype": "TCP", 00:43:44.899 "adrfam": "IPv4", 00:43:44.899 "traddr": "10.0.0.1", 00:43:44.899 "trsvcid": 
"42444" 00:43:44.899 }, 00:43:44.899 "auth": { 00:43:44.899 "state": "completed", 00:43:44.899 "digest": "sha384", 00:43:44.899 "dhgroup": "ffdhe6144" 00:43:44.899 } 00:43:44.899 } 00:43:44.899 ]' 00:43:44.899 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:44.899 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:44.900 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:44.900 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:44.900 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:44.900 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:44.900 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:44.900 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:45.158 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:45.158 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:46.094 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:46.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:46.094 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:46.094 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.094 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:46.094 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.094 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:46.094 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:46.094 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:46.094 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:46.363 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:46.929 00:43:46.929 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:46.929 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:46.929 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:47.188 { 00:43:47.188 "cntlid": 89, 00:43:47.188 "qid": 0, 00:43:47.188 "state": "enabled", 00:43:47.188 "thread": "nvmf_tgt_poll_group_000", 00:43:47.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:47.188 "listen_address": { 00:43:47.188 "trtype": "TCP", 00:43:47.188 "adrfam": "IPv4", 00:43:47.188 "traddr": "10.0.0.3", 00:43:47.188 "trsvcid": "4420" 00:43:47.188 }, 00:43:47.188 "peer_address": { 00:43:47.188 
"trtype": "TCP", 00:43:47.188 "adrfam": "IPv4", 00:43:47.188 "traddr": "10.0.0.1", 00:43:47.188 "trsvcid": "38186" 00:43:47.188 }, 00:43:47.188 "auth": { 00:43:47.188 "state": "completed", 00:43:47.188 "digest": "sha384", 00:43:47.188 "dhgroup": "ffdhe8192" 00:43:47.188 } 00:43:47.188 } 00:43:47.188 ]' 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:47.188 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:47.446 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:47.446 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:47.446 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:47.704 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:47.704 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:48.270 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:48.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:48.270 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:48.270 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.270 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:48.270 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.270 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:48.270 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:48.270 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:48.528 05:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:48.528 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:49.094 00:43:49.353 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:49.353 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:49.353 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:49.353 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:49.353 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:49.353 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:49.353 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:49.611 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:49.611 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:49.611 { 00:43:49.611 "cntlid": 91, 00:43:49.611 "qid": 0, 00:43:49.611 "state": "enabled", 00:43:49.611 "thread": "nvmf_tgt_poll_group_000", 00:43:49.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 
00:43:49.611 "listen_address": { 00:43:49.611 "trtype": "TCP", 00:43:49.611 "adrfam": "IPv4", 00:43:49.611 "traddr": "10.0.0.3", 00:43:49.611 "trsvcid": "4420" 00:43:49.611 }, 00:43:49.611 "peer_address": { 00:43:49.611 "trtype": "TCP", 00:43:49.611 "adrfam": "IPv4", 00:43:49.611 "traddr": "10.0.0.1", 00:43:49.611 "trsvcid": "38208" 00:43:49.611 }, 00:43:49.611 "auth": { 00:43:49.611 "state": "completed", 00:43:49.611 "digest": "sha384", 00:43:49.611 "dhgroup": "ffdhe8192" 00:43:49.611 } 00:43:49.611 } 00:43:49.611 ]' 00:43:49.611 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:49.611 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:49.611 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:49.611 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:49.611 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:49.611 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:49.611 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:49.611 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:49.869 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:49.869 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:50.435 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:50.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:50.435 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:50.435 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.435 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:50.435 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.435 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:50.435 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:50.435 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:50.694 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:43:50.694 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:50.694 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:50.694 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:43:50.694 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:50.694 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:50.694 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:50.695 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.695 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:50.695 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.695 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:50.695 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:50.695 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:51.629 00:43:51.629 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:51.629 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:51.629 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:51.888 { 00:43:51.888 "cntlid": 93, 00:43:51.888 "qid": 0, 00:43:51.888 "state": "enabled", 00:43:51.888 "thread": 
"nvmf_tgt_poll_group_000", 00:43:51.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:51.888 "listen_address": { 00:43:51.888 "trtype": "TCP", 00:43:51.888 "adrfam": "IPv4", 00:43:51.888 "traddr": "10.0.0.3", 00:43:51.888 "trsvcid": "4420" 00:43:51.888 }, 00:43:51.888 "peer_address": { 00:43:51.888 "trtype": "TCP", 00:43:51.888 "adrfam": "IPv4", 00:43:51.888 "traddr": "10.0.0.1", 00:43:51.888 "trsvcid": "38234" 00:43:51.888 }, 00:43:51.888 "auth": { 00:43:51.888 "state": "completed", 00:43:51.888 "digest": "sha384", 00:43:51.888 "dhgroup": "ffdhe8192" 00:43:51.888 } 00:43:51.888 } 00:43:51.888 ]' 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:51.888 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:52.146 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:52.147 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:43:53.083 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:53.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:53.084 05:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.084 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:53.084 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.084 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:53.084 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:53.084 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:53.650 00:43:53.914 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:53.914 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:53.914 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:54.174 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:54.175 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:54.175 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.175 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:54.175 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.175 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:54.175 { 00:43:54.175 "cntlid": 95, 00:43:54.175 "qid": 0, 00:43:54.175 "state": "enabled", 00:43:54.175 
"thread": "nvmf_tgt_poll_group_000", 00:43:54.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:54.175 "listen_address": { 00:43:54.175 "trtype": "TCP", 00:43:54.175 "adrfam": "IPv4", 00:43:54.175 "traddr": "10.0.0.3", 00:43:54.175 "trsvcid": "4420" 00:43:54.175 }, 00:43:54.175 "peer_address": { 00:43:54.175 "trtype": "TCP", 00:43:54.175 "adrfam": "IPv4", 00:43:54.175 "traddr": "10.0.0.1", 00:43:54.175 "trsvcid": "38242" 00:43:54.175 }, 00:43:54.175 "auth": { 00:43:54.175 "state": "completed", 00:43:54.175 "digest": "sha384", 00:43:54.175 "dhgroup": "ffdhe8192" 00:43:54.175 } 00:43:54.175 } 00:43:54.175 ]' 00:43:54.175 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:54.175 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:54.175 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:54.175 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:54.175 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:54.175 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:54.175 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:54.175 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:54.432 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:54.432 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:43:55.368 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:55.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:55.368 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:55.368 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.368 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:55.368 05:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:55.368 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:55.936 00:43:55.936 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:55.936 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:55.936 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:56.194 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:56.194 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:56.194 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.194 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:56.194 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.194 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:56.194 { 00:43:56.194 "cntlid": 97, 00:43:56.194 "qid": 0, 00:43:56.194 "state": "enabled", 00:43:56.194 "thread": "nvmf_tgt_poll_group_000", 00:43:56.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:56.194 "listen_address": { 00:43:56.194 "trtype": "TCP", 00:43:56.194 "adrfam": "IPv4", 00:43:56.194 "traddr": "10.0.0.3", 00:43:56.194 "trsvcid": "4420" 00:43:56.194 }, 00:43:56.194 "peer_address": { 00:43:56.194 "trtype": "TCP", 00:43:56.194 "adrfam": "IPv4", 00:43:56.194 "traddr": "10.0.0.1", 00:43:56.194 "trsvcid": "39112" 00:43:56.194 }, 00:43:56.194 "auth": { 00:43:56.194 "state": "completed", 00:43:56.194 "digest": "sha512", 00:43:56.194 "dhgroup": "null" 00:43:56.194 } 00:43:56.194 } 00:43:56.194 ]' 00:43:56.194 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:56.194 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:56.194 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:56.194 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:43:56.194 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:56.454 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:56.454 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:56.454 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:56.718 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:56.718 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:43:57.284 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:57.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:57.284 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:57.284 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.284 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:57.284 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:43:57.284 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:57.284 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:57.284 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:57.544 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:58.110 00:43:58.110 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:58.110 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:58.110 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:58.368 05:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:58.368 { 00:43:58.368 "cntlid": 99, 00:43:58.368 "qid": 0, 00:43:58.368 "state": "enabled", 00:43:58.368 "thread": "nvmf_tgt_poll_group_000", 00:43:58.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:43:58.368 "listen_address": { 00:43:58.368 "trtype": "TCP", 00:43:58.368 "adrfam": "IPv4", 00:43:58.368 "traddr": "10.0.0.3", 00:43:58.368 "trsvcid": "4420" 00:43:58.368 }, 00:43:58.368 "peer_address": { 00:43:58.368 "trtype": "TCP", 00:43:58.368 "adrfam": "IPv4", 00:43:58.368 "traddr": "10.0.0.1", 00:43:58.368 "trsvcid": "39138" 00:43:58.368 }, 00:43:58.368 "auth": { 00:43:58.368 "state": "completed", 00:43:58.368 "digest": "sha512", 00:43:58.368 "dhgroup": "null" 00:43:58.368 } 00:43:58.368 } 00:43:58.368 ]' 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:58.368 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:58.625 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:58.625 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:43:59.559 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:59.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:59.559 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:43:59.559 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.559 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:59.559 05:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.559 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:59.559 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:59.559 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:59.817 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:00.076 00:44:00.076 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:00.076 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:00.076 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:00.336 { 00:44:00.336 "cntlid": 101, 00:44:00.336 "qid": 0, 00:44:00.336 "state": "enabled", 00:44:00.336 "thread": "nvmf_tgt_poll_group_000", 00:44:00.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:00.336 "listen_address": { 00:44:00.336 "trtype": "TCP", 00:44:00.336 "adrfam": "IPv4", 00:44:00.336 "traddr": "10.0.0.3", 00:44:00.336 "trsvcid": "4420" 00:44:00.336 }, 00:44:00.336 "peer_address": { 00:44:00.336 "trtype": "TCP", 00:44:00.336 "adrfam": "IPv4", 00:44:00.336 "traddr": "10.0.0.1", 00:44:00.336 "trsvcid": "39168" 00:44:00.336 }, 00:44:00.336 "auth": { 00:44:00.336 "state": "completed", 00:44:00.336 "digest": "sha512", 00:44:00.336 "dhgroup": "null" 00:44:00.336 } 00:44:00.336 } 00:44:00.336 ]' 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:44:00.336 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:00.594 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:00.594 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:00.594 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:00.853 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:00.853 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:01.458 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:01.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:01.458 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:01.458 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.458 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:44:01.458 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.458 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:01.458 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:01.458 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:01.717 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:01.976 00:44:02.235 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:02.235 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:02.235 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:02.494 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:02.494 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:02.494 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:44:02.494 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:02.494 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.494 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:02.494 { 00:44:02.494 "cntlid": 103, 00:44:02.494 "qid": 0, 00:44:02.494 "state": "enabled", 00:44:02.494 "thread": "nvmf_tgt_poll_group_000", 00:44:02.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:02.494 "listen_address": { 00:44:02.494 "trtype": "TCP", 00:44:02.494 "adrfam": "IPv4", 00:44:02.494 "traddr": "10.0.0.3", 00:44:02.494 "trsvcid": "4420" 00:44:02.494 }, 00:44:02.495 "peer_address": { 00:44:02.495 "trtype": "TCP", 00:44:02.495 "adrfam": "IPv4", 00:44:02.495 "traddr": "10.0.0.1", 00:44:02.495 "trsvcid": "39190" 00:44:02.495 }, 00:44:02.495 "auth": { 00:44:02.495 "state": "completed", 00:44:02.495 "digest": "sha512", 00:44:02.495 "dhgroup": "null" 00:44:02.495 } 00:44:02.495 } 00:44:02.495 ]' 00:44:02.495 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:02.495 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:02.495 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:02.495 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:44:02.495 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:02.495 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:02.495 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:02.495 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:02.754 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:02.754 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:03.689 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:03.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:03.689 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:03.689 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.689 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:03.689 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:44:03.689 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:03.689 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:03.689 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:03.689 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:03.948 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:04.206 00:44:04.206 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:04.206 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:04.206 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:04.465 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:04.465 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:04.465 
05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:04.465 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:04.465 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:04.465 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:04.465 { 00:44:04.465 "cntlid": 105, 00:44:04.465 "qid": 0, 00:44:04.465 "state": "enabled", 00:44:04.465 "thread": "nvmf_tgt_poll_group_000", 00:44:04.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:04.465 "listen_address": { 00:44:04.465 "trtype": "TCP", 00:44:04.465 "adrfam": "IPv4", 00:44:04.465 "traddr": "10.0.0.3", 00:44:04.465 "trsvcid": "4420" 00:44:04.465 }, 00:44:04.465 "peer_address": { 00:44:04.465 "trtype": "TCP", 00:44:04.465 "adrfam": "IPv4", 00:44:04.465 "traddr": "10.0.0.1", 00:44:04.465 "trsvcid": "39222" 00:44:04.465 }, 00:44:04.465 "auth": { 00:44:04.465 "state": "completed", 00:44:04.465 "digest": "sha512", 00:44:04.465 "dhgroup": "ffdhe2048" 00:44:04.465 } 00:44:04.465 } 00:44:04.465 ]' 00:44:04.465 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:04.724 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:04.724 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:04.724 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:04.724 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:04.724 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:04.724 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:04.724 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:04.984 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:04.984 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:05.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:05.922 05:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:05.922 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:06.491 00:44:06.491 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:06.491 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:06.491 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:06.751 { 00:44:06.751 "cntlid": 107, 00:44:06.751 "qid": 0, 00:44:06.751 "state": "enabled", 00:44:06.751 "thread": "nvmf_tgt_poll_group_000", 00:44:06.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:06.751 "listen_address": { 00:44:06.751 "trtype": "TCP", 00:44:06.751 "adrfam": "IPv4", 00:44:06.751 "traddr": "10.0.0.3", 00:44:06.751 "trsvcid": "4420" 00:44:06.751 }, 00:44:06.751 "peer_address": { 00:44:06.751 "trtype": "TCP", 00:44:06.751 "adrfam": "IPv4", 00:44:06.751 "traddr": "10.0.0.1", 00:44:06.751 "trsvcid": "59480" 00:44:06.751 }, 00:44:06.751 "auth": { 00:44:06.751 "state": "completed", 00:44:06.751 "digest": "sha512", 00:44:06.751 "dhgroup": "ffdhe2048" 00:44:06.751 } 00:44:06.751 } 00:44:06.751 ]' 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:06.751 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:07.012 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:44:07.012 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:44:07.580 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:07.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:07.580 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:07.580 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.580 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:07.580 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.580 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:07.580 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:07.580 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:07.840 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:08.406 00:44:08.406 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:08.406 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:08.406 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:08.664 { 00:44:08.664 "cntlid": 109, 00:44:08.664 "qid": 0, 00:44:08.664 "state": "enabled", 00:44:08.664 "thread": "nvmf_tgt_poll_group_000", 00:44:08.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:08.664 "listen_address": { 00:44:08.664 "trtype": "TCP", 00:44:08.664 "adrfam": "IPv4", 00:44:08.664 "traddr": "10.0.0.3", 00:44:08.664 "trsvcid": "4420" 00:44:08.664 }, 00:44:08.664 "peer_address": { 00:44:08.664 "trtype": "TCP", 00:44:08.664 "adrfam": "IPv4", 00:44:08.664 "traddr": "10.0.0.1", 00:44:08.664 "trsvcid": "59510" 00:44:08.664 }, 00:44:08.664 "auth": { 00:44:08.664 "state": "completed", 00:44:08.664 "digest": "sha512", 00:44:08.664 "dhgroup": "ffdhe2048" 00:44:08.664 } 00:44:08.664 } 00:44:08.664 ]' 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:08.664 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:08.923 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:08.923 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:09.855 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:09.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:09.855 05:34:56 
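Each round then validates the qpair state the target reports. The pattern in the trace is simply jq over the nvmf_subsystem_get_qpairs output; a minimal standalone version of that assertion, under the same assumptions as the sketch above, looks like this:

```bash
# Assert that the negotiated auth parameters on the first qpair match what the
# test configured (values taken from this iteration of the trace).
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
```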
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:09.855 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.855 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:09.855 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.855 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:09.855 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:09.855 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:10.112 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:10.370 00:44:10.370 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:10.370 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:10.370 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:44:10.628 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:10.628 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:10.628 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.628 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:10.628 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:10.628 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:10.628 { 00:44:10.628 "cntlid": 111, 00:44:10.628 "qid": 0, 00:44:10.628 "state": "enabled", 00:44:10.628 "thread": "nvmf_tgt_poll_group_000", 00:44:10.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:10.628 "listen_address": { 00:44:10.628 "trtype": "TCP", 00:44:10.628 "adrfam": "IPv4", 00:44:10.628 "traddr": "10.0.0.3", 00:44:10.628 "trsvcid": "4420" 00:44:10.628 }, 00:44:10.628 "peer_address": { 00:44:10.628 "trtype": "TCP", 00:44:10.628 "adrfam": "IPv4", 00:44:10.628 "traddr": "10.0.0.1", 00:44:10.628 "trsvcid": "59538" 00:44:10.628 }, 00:44:10.628 "auth": { 00:44:10.628 "state": "completed", 00:44:10.628 "digest": "sha512", 00:44:10.628 "dhgroup": "ffdhe2048" 00:44:10.628 } 00:44:10.628 } 00:44:10.628 ]' 00:44:10.628 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:10.898 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:10.898 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:10.898 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:10.898 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:10.898 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:10.898 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:10.898 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:11.156 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:11.156 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:11.723 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:11.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:11.724 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:11.724 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.724 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:11.724 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.724 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:11.724 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:11.724 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:11.724 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:11.982 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:12.548 00:44:12.548 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:12.548 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:12.548 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:12.807 { 00:44:12.807 "cntlid": 113, 00:44:12.807 "qid": 0, 00:44:12.807 "state": "enabled", 00:44:12.807 "thread": "nvmf_tgt_poll_group_000", 00:44:12.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:12.807 "listen_address": { 00:44:12.807 "trtype": "TCP", 00:44:12.807 "adrfam": "IPv4", 00:44:12.807 "traddr": "10.0.0.3", 00:44:12.807 "trsvcid": "4420" 00:44:12.807 }, 00:44:12.807 "peer_address": { 00:44:12.807 "trtype": "TCP", 00:44:12.807 "adrfam": "IPv4", 00:44:12.807 "traddr": "10.0.0.1", 00:44:12.807 "trsvcid": "59562" 00:44:12.807 }, 00:44:12.807 "auth": { 00:44:12.807 "state": "completed", 00:44:12.807 "digest": "sha512", 00:44:12.807 "dhgroup": "ffdhe3072" 00:44:12.807 } 00:44:12.807 } 00:44:12.807 ]' 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:12.807 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:13.065 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:13.065 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret 
DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:13.999 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:13.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:13.999 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:13.999 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:13.999 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:13.999 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:13.999 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:13.999 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:13.999 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:14.258 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:14.517 00:44:14.517 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:14.517 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:14.517 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:14.776 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:14.776 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:14.776 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.776 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:14.776 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.776 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:14.776 { 00:44:14.776 "cntlid": 115, 00:44:14.776 "qid": 0, 00:44:14.776 "state": "enabled", 00:44:14.776 "thread": "nvmf_tgt_poll_group_000", 00:44:14.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:14.776 "listen_address": { 00:44:14.776 "trtype": "TCP", 00:44:14.776 "adrfam": "IPv4", 00:44:14.776 "traddr": "10.0.0.3", 00:44:14.776 "trsvcid": "4420" 00:44:14.776 }, 00:44:14.776 "peer_address": { 00:44:14.776 "trtype": "TCP", 00:44:14.776 "adrfam": "IPv4", 00:44:14.776 "traddr": "10.0.0.1", 00:44:14.776 "trsvcid": "53656" 00:44:14.776 }, 00:44:14.776 "auth": { 00:44:14.776 "state": "completed", 00:44:14.776 "digest": "sha512", 00:44:14.776 "dhgroup": "ffdhe3072" 00:44:14.776 } 00:44:14.776 } 00:44:14.776 ]' 00:44:14.776 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:14.776 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:14.776 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:15.035 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:15.035 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:15.035 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:15.035 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:15.035 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:15.294 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:44:15.294 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 
9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:44:15.889 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:15.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:15.889 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:15.889 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:15.889 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:15.889 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:15.889 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:15.889 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:15.889 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:16.198 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:16.457 00:44:16.457 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:16.457 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:16.457 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:16.716 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:16.716 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:16.716 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.716 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:16.716 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.716 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:16.716 { 00:44:16.716 "cntlid": 117, 00:44:16.716 "qid": 0, 00:44:16.716 "state": "enabled", 00:44:16.716 "thread": "nvmf_tgt_poll_group_000", 00:44:16.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:16.716 "listen_address": { 00:44:16.716 "trtype": "TCP", 00:44:16.716 "adrfam": "IPv4", 00:44:16.716 "traddr": "10.0.0.3", 00:44:16.716 "trsvcid": "4420" 00:44:16.716 }, 00:44:16.716 "peer_address": { 00:44:16.716 "trtype": "TCP", 00:44:16.716 "adrfam": "IPv4", 00:44:16.716 "traddr": "10.0.0.1", 00:44:16.716 "trsvcid": "53680" 00:44:16.716 }, 00:44:16.716 "auth": { 00:44:16.716 "state": "completed", 00:44:16.716 "digest": "sha512", 00:44:16.716 "dhgroup": "ffdhe3072" 00:44:16.716 } 00:44:16.716 } 00:44:16.716 ]' 00:44:16.716 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:16.717 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:16.717 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:16.717 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:16.717 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:16.976 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:16.976 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:16.976 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:17.234 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:17.234 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:17.800 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:17.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:17.800 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:17.800 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.800 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:17.800 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.800 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:17.800 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:17.800 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:18.057 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:18.315 00:44:18.573 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:18.573 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:18.574 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:18.574 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:18.574 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:18.574 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.574 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:18.831 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.831 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:18.831 { 00:44:18.831 "cntlid": 119, 00:44:18.831 "qid": 0, 00:44:18.831 "state": "enabled", 00:44:18.831 "thread": "nvmf_tgt_poll_group_000", 00:44:18.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:18.831 "listen_address": { 00:44:18.831 "trtype": "TCP", 00:44:18.831 "adrfam": "IPv4", 00:44:18.831 "traddr": "10.0.0.3", 00:44:18.831 "trsvcid": "4420" 00:44:18.831 }, 00:44:18.831 "peer_address": { 00:44:18.831 "trtype": "TCP", 00:44:18.831 "adrfam": "IPv4", 00:44:18.831 "traddr": "10.0.0.1", 00:44:18.831 "trsvcid": "53700" 00:44:18.831 }, 00:44:18.831 "auth": { 00:44:18.831 "state": "completed", 00:44:18.831 "digest": "sha512", 00:44:18.831 "dhgroup": "ffdhe3072" 00:44:18.831 } 00:44:18.831 } 00:44:18.831 ]' 00:44:18.831 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:18.831 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:18.831 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:18.832 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:18.832 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:18.832 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:18.832 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:18.832 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:19.090 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:19.090 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:20.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:20.023 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:20.645 00:44:20.645 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:20.645 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:20.645 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:20.903 { 00:44:20.903 "cntlid": 121, 00:44:20.903 "qid": 0, 00:44:20.903 "state": "enabled", 00:44:20.903 "thread": "nvmf_tgt_poll_group_000", 00:44:20.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:20.903 "listen_address": { 00:44:20.903 "trtype": "TCP", 00:44:20.903 "adrfam": "IPv4", 00:44:20.903 "traddr": "10.0.0.3", 00:44:20.903 "trsvcid": "4420" 00:44:20.903 }, 00:44:20.903 "peer_address": { 00:44:20.903 "trtype": "TCP", 00:44:20.903 "adrfam": "IPv4", 00:44:20.903 "traddr": "10.0.0.1", 00:44:20.903 "trsvcid": "53736" 00:44:20.903 }, 00:44:20.903 "auth": { 00:44:20.903 "state": "completed", 00:44:20.903 "digest": "sha512", 00:44:20.903 "dhgroup": "ffdhe4096" 00:44:20.903 } 00:44:20.903 } 00:44:20.903 ]' 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:20.903 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:21.161 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret 
DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:21.161 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:22.095 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:22.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:22.095 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:22.095 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.095 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:22.095 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.095 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:22.095 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:22.095 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:22.353 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:22.612 00:44:22.612 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:22.612 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:22.612 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:22.870 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:22.870 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:22.870 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.870 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:23.130 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.130 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:23.130 { 00:44:23.130 "cntlid": 123, 00:44:23.130 "qid": 0, 00:44:23.130 "state": "enabled", 00:44:23.130 "thread": "nvmf_tgt_poll_group_000", 00:44:23.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:23.130 "listen_address": { 00:44:23.130 "trtype": "TCP", 00:44:23.130 "adrfam": "IPv4", 00:44:23.130 "traddr": "10.0.0.3", 00:44:23.130 "trsvcid": "4420" 00:44:23.130 }, 00:44:23.130 "peer_address": { 00:44:23.130 "trtype": "TCP", 00:44:23.130 "adrfam": "IPv4", 00:44:23.130 "traddr": "10.0.0.1", 00:44:23.130 "trsvcid": "53742" 00:44:23.130 }, 00:44:23.130 "auth": { 00:44:23.130 "state": "completed", 00:44:23.130 "digest": "sha512", 00:44:23.130 "dhgroup": "ffdhe4096" 00:44:23.130 } 00:44:23.130 } 00:44:23.130 ]' 00:44:23.130 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:23.130 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:23.130 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:23.130 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:23.130 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:23.130 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:23.130 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:23.130 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:23.389 05:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:44:23.389 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:44:23.954 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:23.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:23.954 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:23.954 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.954 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:23.954 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.954 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:23.954 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:23.954 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.212 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:24.212 05:35:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:24.213 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:24.780 00:44:24.780 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:24.780 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:24.780 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:25.039 { 00:44:25.039 "cntlid": 125, 00:44:25.039 "qid": 0, 00:44:25.039 "state": "enabled", 00:44:25.039 "thread": "nvmf_tgt_poll_group_000", 00:44:25.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:25.039 "listen_address": { 00:44:25.039 "trtype": "TCP", 00:44:25.039 "adrfam": "IPv4", 00:44:25.039 "traddr": "10.0.0.3", 00:44:25.039 "trsvcid": "4420" 00:44:25.039 }, 00:44:25.039 "peer_address": { 00:44:25.039 "trtype": "TCP", 00:44:25.039 "adrfam": "IPv4", 00:44:25.039 "traddr": "10.0.0.1", 00:44:25.039 "trsvcid": "52736" 00:44:25.039 }, 00:44:25.039 "auth": { 00:44:25.039 "state": "completed", 00:44:25.039 "digest": "sha512", 00:44:25.039 "dhgroup": "ffdhe4096" 00:44:25.039 } 00:44:25.039 } 00:44:25.039 ]' 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:25.039 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:25.298 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:25.298 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:26.299 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:26.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:26.299 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:26.299 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.299 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:26.299 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.299 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:26.299 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:26.299 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:26.299 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:26.867 00:44:26.867 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:26.867 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:26.867 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:27.127 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:27.127 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:27.127 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.127 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:27.127 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.127 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:27.127 { 00:44:27.127 "cntlid": 127, 00:44:27.127 "qid": 0, 00:44:27.127 "state": "enabled", 00:44:27.127 "thread": "nvmf_tgt_poll_group_000", 00:44:27.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:27.127 "listen_address": { 00:44:27.127 "trtype": "TCP", 00:44:27.127 "adrfam": "IPv4", 00:44:27.127 "traddr": "10.0.0.3", 00:44:27.127 "trsvcid": "4420" 00:44:27.127 }, 00:44:27.127 "peer_address": { 00:44:27.127 "trtype": "TCP", 00:44:27.127 "adrfam": "IPv4", 00:44:27.127 "traddr": "10.0.0.1", 00:44:27.127 "trsvcid": "52760" 00:44:27.127 }, 00:44:27.127 "auth": { 00:44:27.127 "state": "completed", 00:44:27.127 "digest": "sha512", 00:44:27.127 "dhgroup": "ffdhe4096" 00:44:27.127 } 00:44:27.127 } 00:44:27.127 ]' 00:44:27.127 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:27.127 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:27.127 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:27.127 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:27.127 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:27.127 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:27.127 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:27.127 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:27.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:27.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:28.329 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:28.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:28.329 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:28.329 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.329 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:28.329 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.329 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:28.329 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:28.329 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:28.329 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.588 05:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:28.588 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:28.847 00:44:28.847 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:28.847 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:28.847 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:29.413 { 00:44:29.413 "cntlid": 129, 00:44:29.413 "qid": 0, 00:44:29.413 "state": "enabled", 00:44:29.413 "thread": "nvmf_tgt_poll_group_000", 00:44:29.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:29.413 "listen_address": { 00:44:29.413 "trtype": "TCP", 00:44:29.413 "adrfam": "IPv4", 00:44:29.413 "traddr": "10.0.0.3", 00:44:29.413 "trsvcid": "4420" 00:44:29.413 }, 00:44:29.413 "peer_address": { 00:44:29.413 "trtype": "TCP", 00:44:29.413 "adrfam": "IPv4", 00:44:29.413 "traddr": "10.0.0.1", 00:44:29.413 "trsvcid": "52784" 00:44:29.413 }, 00:44:29.413 "auth": { 00:44:29.413 "state": "completed", 00:44:29.413 "digest": "sha512", 00:44:29.413 "dhgroup": "ffdhe6144" 00:44:29.413 } 00:44:29.413 } 00:44:29.413 ]' 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:29.413 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:29.671 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:29.671 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:30.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:30.604 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:30.605 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:30.605 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:30.605 05:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:30.605 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:30.605 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:30.605 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:30.605 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:31.173 00:44:31.173 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:31.173 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:31.173 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:31.432 { 00:44:31.432 "cntlid": 131, 00:44:31.432 "qid": 0, 00:44:31.432 "state": "enabled", 00:44:31.432 "thread": "nvmf_tgt_poll_group_000", 00:44:31.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:31.432 "listen_address": { 00:44:31.432 "trtype": "TCP", 00:44:31.432 "adrfam": "IPv4", 00:44:31.432 "traddr": "10.0.0.3", 00:44:31.432 "trsvcid": "4420" 00:44:31.432 }, 00:44:31.432 "peer_address": { 00:44:31.432 "trtype": "TCP", 00:44:31.432 "adrfam": "IPv4", 00:44:31.432 "traddr": "10.0.0.1", 00:44:31.432 "trsvcid": "52810" 00:44:31.432 }, 00:44:31.432 "auth": { 00:44:31.432 "state": "completed", 00:44:31.432 "digest": "sha512", 00:44:31.432 "dhgroup": "ffdhe6144" 00:44:31.432 } 00:44:31.432 } 00:44:31.432 ]' 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:31.432 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:31.690 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:44:31.690 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:44:32.257 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:32.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:32.257 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:32.257 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.257 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:32.257 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.257 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:32.257 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:32.257 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:32.823 05:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:32.823 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:33.082 00:44:33.082 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:33.082 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:33.082 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:33.340 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:33.340 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:33.340 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.340 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:33.340 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.340 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:33.340 { 00:44:33.340 "cntlid": 133, 00:44:33.340 "qid": 0, 00:44:33.340 "state": "enabled", 00:44:33.340 "thread": "nvmf_tgt_poll_group_000", 00:44:33.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:33.340 "listen_address": { 00:44:33.340 "trtype": "TCP", 00:44:33.340 "adrfam": "IPv4", 00:44:33.340 "traddr": "10.0.0.3", 00:44:33.340 "trsvcid": "4420" 00:44:33.340 }, 00:44:33.340 "peer_address": { 00:44:33.340 "trtype": "TCP", 00:44:33.340 "adrfam": "IPv4", 00:44:33.340 "traddr": "10.0.0.1", 00:44:33.340 "trsvcid": "52844" 00:44:33.340 }, 00:44:33.340 "auth": { 00:44:33.340 "state": "completed", 00:44:33.340 "digest": "sha512", 00:44:33.340 "dhgroup": "ffdhe6144" 00:44:33.340 } 00:44:33.340 } 00:44:33.340 ]' 00:44:33.340 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:33.340 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:33.341 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:33.341 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:44:33.341 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:33.600 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:33.600 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:33.600 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:33.859 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:33.859 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:34.426 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:34.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:34.426 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:34.426 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.426 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:34.426 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.426 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:34.426 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:34.426 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:34.686 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:34.946 00:44:34.946 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:34.946 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:34.946 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:35.514 { 00:44:35.514 "cntlid": 135, 00:44:35.514 "qid": 0, 00:44:35.514 "state": "enabled", 00:44:35.514 "thread": "nvmf_tgt_poll_group_000", 00:44:35.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:35.514 "listen_address": { 00:44:35.514 "trtype": "TCP", 00:44:35.514 "adrfam": "IPv4", 00:44:35.514 "traddr": "10.0.0.3", 00:44:35.514 "trsvcid": "4420" 00:44:35.514 }, 00:44:35.514 "peer_address": { 00:44:35.514 "trtype": "TCP", 00:44:35.514 "adrfam": "IPv4", 00:44:35.514 "traddr": "10.0.0.1", 00:44:35.514 "trsvcid": "40214" 00:44:35.514 }, 00:44:35.514 "auth": { 00:44:35.514 "state": "completed", 00:44:35.514 "digest": "sha512", 00:44:35.514 "dhgroup": "ffdhe6144" 00:44:35.514 } 00:44:35.514 } 00:44:35.514 ]' 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:35.514 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:35.515 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:35.515 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:35.774 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:35.774 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:36.378 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:36.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:36.378 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:36.378 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.378 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:36.378 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.378 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:36.378 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:36.378 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:36.378 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:36.637 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:37.205 00:44:37.205 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:37.205 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:37.205 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:37.463 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:37.463 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:37.463 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.463 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:37.463 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.463 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:37.463 { 00:44:37.463 "cntlid": 137, 00:44:37.463 "qid": 0, 00:44:37.463 "state": "enabled", 00:44:37.463 "thread": "nvmf_tgt_poll_group_000", 00:44:37.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:37.463 "listen_address": { 00:44:37.463 "trtype": "TCP", 00:44:37.463 "adrfam": "IPv4", 00:44:37.463 "traddr": "10.0.0.3", 00:44:37.463 "trsvcid": "4420" 00:44:37.463 }, 00:44:37.463 "peer_address": { 00:44:37.463 "trtype": "TCP", 00:44:37.463 "adrfam": "IPv4", 00:44:37.463 "traddr": "10.0.0.1", 00:44:37.463 "trsvcid": "40234" 00:44:37.463 }, 00:44:37.463 "auth": { 00:44:37.463 "state": "completed", 00:44:37.463 "digest": "sha512", 00:44:37.463 "dhgroup": "ffdhe8192" 00:44:37.463 } 00:44:37.463 } 00:44:37.463 ]' 00:44:37.463 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:37.463 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:37.463 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:37.463 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:37.463 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:37.721 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:37.722 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:37.722 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:37.722 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:37.722 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:38.656 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:38.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:38.656 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:38.656 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.656 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:38.656 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.656 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:38.656 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:38.656 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:38.914 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:38.915 05:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:38.915 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:39.481 00:44:39.481 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:39.481 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:39.481 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:39.481 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:39.481 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:39.481 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:39.481 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:39.739 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:39.739 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:39.739 { 00:44:39.739 "cntlid": 139, 00:44:39.739 "qid": 0, 00:44:39.739 "state": "enabled", 00:44:39.739 "thread": "nvmf_tgt_poll_group_000", 00:44:39.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:39.739 "listen_address": { 00:44:39.739 "trtype": "TCP", 00:44:39.739 "adrfam": "IPv4", 00:44:39.739 "traddr": "10.0.0.3", 00:44:39.739 "trsvcid": "4420" 00:44:39.739 }, 00:44:39.739 "peer_address": { 00:44:39.739 "trtype": "TCP", 00:44:39.739 "adrfam": "IPv4", 00:44:39.739 "traddr": "10.0.0.1", 00:44:39.739 "trsvcid": "40266" 00:44:39.739 }, 00:44:39.739 "auth": { 00:44:39.739 "state": "completed", 00:44:39.739 "digest": "sha512", 00:44:39.739 "dhgroup": "ffdhe8192" 00:44:39.739 } 00:44:39.739 } 00:44:39.739 ]' 00:44:39.739 05:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:39.739 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:39.739 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:39.739 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:39.739 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:39.739 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:39.739 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:39.739 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:39.998 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:44:39.998 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: --dhchap-ctrl-secret DHHC-1:02:ZDg2NmU2MzBiYTE1YjQzMGI1MWM2MWM2MjJhYmIzNzZiZTQ4YTVhYTRkYmIwNTQy/i7I7A==: 00:44:40.564 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:40.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:40.564 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:40.564 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.564 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:40.564 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.564 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:40.564 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:40.564 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:40.822 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:41.388 00:44:41.388 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:41.388 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:41.388 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:41.646 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:41.646 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:41.646 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.646 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:41.646 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.646 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:41.646 { 00:44:41.646 "cntlid": 141, 00:44:41.646 "qid": 0, 00:44:41.646 "state": "enabled", 00:44:41.646 "thread": "nvmf_tgt_poll_group_000", 00:44:41.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:41.646 "listen_address": { 00:44:41.646 "trtype": "TCP", 00:44:41.646 "adrfam": "IPv4", 00:44:41.646 "traddr": "10.0.0.3", 00:44:41.646 "trsvcid": "4420" 00:44:41.646 }, 00:44:41.646 "peer_address": { 00:44:41.646 "trtype": "TCP", 00:44:41.646 "adrfam": "IPv4", 00:44:41.646 "traddr": "10.0.0.1", 00:44:41.646 "trsvcid": "40294" 00:44:41.646 }, 00:44:41.646 "auth": { 00:44:41.646 "state": "completed", 00:44:41.646 "digest": 
"sha512", 00:44:41.646 "dhgroup": "ffdhe8192" 00:44:41.646 } 00:44:41.646 } 00:44:41.646 ]' 00:44:41.646 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:41.904 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:41.904 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:41.904 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:41.904 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:41.904 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:41.904 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:41.904 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:42.163 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:42.163 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:01:OWM4N2ZkMWEzY2M0ZGY5MWVmZTczNzIxNWIxMmVkZmTshHtX: 00:44:42.729 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:42.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:42.729 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:42.729 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:42.729 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:42.988 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:43.922 00:44:43.922 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:43.922 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:43.922 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:43.922 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:43.922 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:43.922 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:43.922 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:43.922 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:43.922 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:43.922 { 00:44:43.922 "cntlid": 143, 00:44:43.922 "qid": 0, 00:44:43.922 "state": "enabled", 00:44:43.922 "thread": "nvmf_tgt_poll_group_000", 00:44:43.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:43.922 "listen_address": { 00:44:43.922 "trtype": "TCP", 00:44:43.922 "adrfam": "IPv4", 00:44:43.922 "traddr": "10.0.0.3", 00:44:43.922 "trsvcid": "4420" 00:44:43.922 }, 00:44:43.922 "peer_address": { 00:44:43.922 "trtype": "TCP", 00:44:43.922 "adrfam": "IPv4", 00:44:43.922 "traddr": "10.0.0.1", 00:44:43.922 "trsvcid": "40316" 00:44:43.922 }, 00:44:43.922 "auth": { 00:44:43.922 "state": "completed", 00:44:43.922 
"digest": "sha512", 00:44:43.922 "dhgroup": "ffdhe8192" 00:44:43.922 } 00:44:43.922 } 00:44:43.922 ]' 00:44:43.922 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:44.181 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:44.181 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:44.181 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:44.181 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:44.181 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:44.181 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:44.181 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:44.440 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:44.440 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:45.376 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:45.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:45.376 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:45.943 00:44:46.203 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:46.203 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:46.203 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:46.462 { 00:44:46.462 "cntlid": 145, 00:44:46.462 "qid": 0, 00:44:46.462 "state": "enabled", 00:44:46.462 "thread": "nvmf_tgt_poll_group_000", 00:44:46.462 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:46.462 "listen_address": { 00:44:46.462 "trtype": "TCP", 00:44:46.462 "adrfam": "IPv4", 00:44:46.462 "traddr": "10.0.0.3", 00:44:46.462 "trsvcid": "4420" 00:44:46.462 }, 00:44:46.462 "peer_address": { 00:44:46.462 "trtype": "TCP", 00:44:46.462 "adrfam": "IPv4", 00:44:46.462 "traddr": "10.0.0.1", 00:44:46.462 "trsvcid": "55284" 00:44:46.462 }, 00:44:46.462 "auth": { 00:44:46.462 "state": "completed", 00:44:46.462 "digest": "sha512", 00:44:46.462 "dhgroup": "ffdhe8192" 00:44:46.462 } 00:44:46.462 } 00:44:46.462 ]' 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:46.462 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:46.722 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:46.722 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:00:ZDQ2MDU1MTlmYmFjN2I2MjE2NGYwYjZiM2Y3MDhkZDNjMWI5NzZmNDliNjJlZmU1AoZpIg==: --dhchap-ctrl-secret DHHC-1:03:Y2NhMDFiZGQ4OWQ4NmRhMDI5NDhmNTAzMjNkNGZkMWU4N2I1YTI1MjJjYjI4MmM5ZDc1ZmUyN2Y2MzdlMjkzZM56l18=: 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:47.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 00:44:47.656 05:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:44:47.656 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:44:47.657 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:44:47.657 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:47.657 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:44:47.657 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:47.657 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:44:47.657 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:44:47.657 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:44:48.226 request: 00:44:48.226 { 00:44:48.226 "name": "nvme0", 00:44:48.226 "trtype": "tcp", 00:44:48.226 "traddr": "10.0.0.3", 00:44:48.226 "adrfam": "ipv4", 00:44:48.226 "trsvcid": "4420", 00:44:48.226 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:44:48.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:48.226 "prchk_reftag": false, 00:44:48.226 "prchk_guard": false, 00:44:48.226 "hdgst": false, 00:44:48.226 "ddgst": false, 00:44:48.226 "dhchap_key": "key2", 00:44:48.226 "allow_unrecognized_csi": false, 00:44:48.226 "method": "bdev_nvme_attach_controller", 00:44:48.226 "req_id": 1 00:44:48.226 } 00:44:48.226 Got JSON-RPC error response 00:44:48.226 response: 00:44:48.226 { 00:44:48.226 "code": -5, 00:44:48.226 "message": "Input/output error" 00:44:48.226 } 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:48.226 
05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:44:48.226 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:44:48.792 request: 00:44:48.792 { 00:44:48.792 "name": "nvme0", 00:44:48.792 "trtype": "tcp", 00:44:48.792 "traddr": "10.0.0.3", 00:44:48.792 "adrfam": "ipv4", 00:44:48.792 "trsvcid": "4420", 00:44:48.792 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:44:48.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:48.792 "prchk_reftag": false, 00:44:48.792 "prchk_guard": false, 00:44:48.792 "hdgst": false, 00:44:48.792 "ddgst": false, 00:44:48.792 "dhchap_key": "key1", 00:44:48.792 "dhchap_ctrlr_key": "ckey2", 00:44:48.792 "allow_unrecognized_csi": false, 00:44:48.792 "method": "bdev_nvme_attach_controller", 00:44:48.792 "req_id": 1 00:44:48.792 } 00:44:48.792 Got JSON-RPC error response 00:44:48.792 response: 00:44:48.792 { 
00:44:48.792 "code": -5, 00:44:48.792 "message": "Input/output error" 00:44:48.792 } 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:48.792 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:49.050 
request: 00:44:49.050 { 00:44:49.050 "name": "nvme0", 00:44:49.050 "trtype": "tcp", 00:44:49.050 "traddr": "10.0.0.3", 00:44:49.050 "adrfam": "ipv4", 00:44:49.050 "trsvcid": "4420", 00:44:49.050 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:44:49.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:49.050 "prchk_reftag": false, 00:44:49.050 "prchk_guard": false, 00:44:49.050 "hdgst": false, 00:44:49.050 "ddgst": false, 00:44:49.050 "dhchap_key": "key1", 00:44:49.050 "dhchap_ctrlr_key": "ckey1", 00:44:49.050 "allow_unrecognized_csi": false, 00:44:49.050 "method": "bdev_nvme_attach_controller", 00:44:49.050 "req_id": 1 00:44:49.050 } 00:44:49.050 Got JSON-RPC error response 00:44:49.050 response: 00:44:49.050 { 00:44:49.050 "code": -5, 00:44:49.050 "message": "Input/output error" 00:44:49.050 } 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67012 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67012 ']' 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67012 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67012 00:44:49.308 killing process with pid 67012 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67012' 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67012 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67012 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:49.308 05:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70104 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70104 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70104 ']' 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:49.308 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:50.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70104 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70104 ']' 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.684 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.684 null0 00:44:50.943 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.943 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:44:50.943 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.a3x 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.5E0 ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5E0 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.iZx 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.MEF ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MEF 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:44:50.944 05:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.96c 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ctc ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ctc 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.D7Y 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:44:50.944 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:51.878 nvme0n1 00:44:51.878 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:51.878 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:51.878 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:52.137 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:52.137 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:52.137 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:52.137 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:52.137 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:52.137 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:52.137 { 00:44:52.137 "cntlid": 1, 00:44:52.137 "qid": 0, 00:44:52.137 "state": "enabled", 00:44:52.137 "thread": "nvmf_tgt_poll_group_000", 00:44:52.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:52.137 "listen_address": { 00:44:52.137 "trtype": "TCP", 00:44:52.137 "adrfam": "IPv4", 00:44:52.137 "traddr": "10.0.0.3", 00:44:52.137 "trsvcid": "4420" 00:44:52.137 }, 00:44:52.137 "peer_address": { 00:44:52.137 "trtype": "TCP", 00:44:52.137 "adrfam": "IPv4", 00:44:52.137 "traddr": "10.0.0.1", 00:44:52.137 "trsvcid": "55334" 00:44:52.137 }, 00:44:52.137 "auth": { 00:44:52.137 "state": "completed", 00:44:52.137 "digest": "sha512", 00:44:52.137 "dhgroup": "ffdhe8192" 00:44:52.137 } 00:44:52.137 } 00:44:52.137 ]' 00:44:52.137 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:52.137 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:52.137 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:52.137 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:52.137 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:52.396 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:52.396 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:52.396 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:52.654 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:52.654 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:53.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key3 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:44:53.222 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:44:53.480 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:44:53.481 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:44:53.481 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:44:53.481 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:44:53.481 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:53.481 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:44:53.481 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:53.481 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:53.481 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:53.481 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:54.047 request: 00:44:54.047 { 00:44:54.047 "name": "nvme0", 00:44:54.047 "trtype": "tcp", 00:44:54.047 "traddr": "10.0.0.3", 00:44:54.047 "adrfam": "ipv4", 00:44:54.047 "trsvcid": "4420", 00:44:54.047 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:44:54.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:54.047 "prchk_reftag": false, 00:44:54.047 "prchk_guard": false, 00:44:54.047 "hdgst": false, 00:44:54.047 "ddgst": false, 00:44:54.047 "dhchap_key": "key3", 00:44:54.047 "allow_unrecognized_csi": false, 00:44:54.047 "method": "bdev_nvme_attach_controller", 00:44:54.047 "req_id": 1 00:44:54.047 } 00:44:54.047 Got JSON-RPC error response 00:44:54.047 response: 00:44:54.047 { 00:44:54.047 "code": -5, 00:44:54.047 "message": "Input/output error" 00:44:54.047 } 00:44:54.047 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:44:54.047 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:54.047 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:54.047 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:54.047 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:44:54.047 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:44:54.047 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:44:54.047 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:44:54.047 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:44:54.047 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:44:54.047 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:44:54.047 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:44:54.047 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:54.047 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:44:54.047 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:54.047 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:54.047 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:54.047 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:54.614 request: 00:44:54.614 { 00:44:54.614 "name": "nvme0", 00:44:54.614 "trtype": "tcp", 00:44:54.614 "traddr": "10.0.0.3", 00:44:54.614 "adrfam": "ipv4", 00:44:54.614 "trsvcid": "4420", 00:44:54.614 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:44:54.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:54.614 "prchk_reftag": false, 00:44:54.614 "prchk_guard": false, 00:44:54.614 "hdgst": false, 00:44:54.614 "ddgst": false, 00:44:54.614 "dhchap_key": "key3", 00:44:54.614 "allow_unrecognized_csi": false, 00:44:54.614 "method": "bdev_nvme_attach_controller", 00:44:54.614 "req_id": 1 00:44:54.614 } 00:44:54.614 Got JSON-RPC error response 00:44:54.614 response: 00:44:54.614 { 00:44:54.614 "code": -5, 00:44:54.614 "message": "Input/output error" 00:44:54.614 } 00:44:54.614 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:44:54.614 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:54.614 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:54.614 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:54.614 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:44:54.614 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:44:54.614 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:44:54.614 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:44:54.614 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:44:54.614 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:44:54.872 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:44:55.131 request: 00:44:55.131 { 00:44:55.131 "name": "nvme0", 00:44:55.131 "trtype": "tcp", 00:44:55.131 "traddr": "10.0.0.3", 00:44:55.131 "adrfam": "ipv4", 00:44:55.131 "trsvcid": "4420", 00:44:55.131 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:44:55.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:55.131 "prchk_reftag": false, 00:44:55.131 "prchk_guard": false, 00:44:55.131 "hdgst": false, 00:44:55.131 "ddgst": false, 00:44:55.131 "dhchap_key": "key0", 00:44:55.131 "dhchap_ctrlr_key": "key1", 00:44:55.131 "allow_unrecognized_csi": false, 00:44:55.131 "method": "bdev_nvme_attach_controller", 00:44:55.131 "req_id": 1 00:44:55.131 } 00:44:55.131 Got JSON-RPC error response 00:44:55.131 response: 00:44:55.131 { 00:44:55.131 "code": -5, 00:44:55.131 "message": "Input/output error" 00:44:55.131 } 00:44:55.131 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:44:55.131 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:55.131 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:55.131 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:44:55.131 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:44:55.132 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:44:55.132 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:44:55.391 nvme0n1 00:44:55.391 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:44:55.391 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:55.391 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:44:55.960 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:55.960 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:55.960 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:55.960 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 00:44:55.960 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.960 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:55.960 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.960 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:44:55.960 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:44:55.960 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:44:56.897 nvme0n1 00:44:56.897 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:44:56.897 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:56.897 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:44:57.156 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:57.156 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key key3 00:44:57.156 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.156 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:57.156 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.156 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:44:57.156 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:44:57.156 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:57.415 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:57.415 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:57.415 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid 9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -l 0 --dhchap-secret DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: --dhchap-ctrl-secret DHHC-1:03:YTM3NTgyZGQ1ZTYyNmNjYTlmNDVlZjlkMjZjYzE4YTU1ZTVhZjI1ZWZiNjMzOGJmOGZhMjEyZWYyMmM5NmExNZSkQ9o=: 00:44:57.983 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:44:57.983 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:44:57.983 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:44:57.983 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:44:57.983 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:44:57.983 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:44:57.983 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:44:57.983 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:57.983 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:58.244 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:44:58.244 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:44:58.244 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:44:58.244 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:44:58.244 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:58.244 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:44:58.245 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:58.245 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:44:58.245 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:44:58.245 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:44:58.813 request: 00:44:58.813 { 00:44:58.813 "name": "nvme0", 00:44:58.813 "trtype": "tcp", 00:44:58.813 "traddr": "10.0.0.3", 00:44:58.813 "adrfam": "ipv4", 00:44:58.813 "trsvcid": "4420", 00:44:58.813 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:44:58.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535", 00:44:58.813 "prchk_reftag": false, 00:44:58.813 "prchk_guard": false, 00:44:58.813 "hdgst": false, 00:44:58.813 "ddgst": false, 00:44:58.813 "dhchap_key": "key1", 00:44:58.813 "allow_unrecognized_csi": false, 00:44:58.813 "method": "bdev_nvme_attach_controller", 00:44:58.813 "req_id": 1 00:44:58.813 } 00:44:58.813 Got JSON-RPC error response 00:44:58.813 response: 00:44:58.813 { 00:44:58.813 "code": -5, 00:44:58.813 "message": "Input/output error" 00:44:58.813 } 00:44:58.813 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:44:58.813 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:58.813 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:58.813 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:58.813 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:44:58.813 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:44:58.813 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:44:59.750 nvme0n1 00:44:59.750 
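Aside for readers tracing auth.sh at this point: the pattern repeated across these entries is a DH-HMAC-CHAP key rotation. The target first narrows the keys it will accept from this host with nvmf_subsystem_set_keys, the host then re-attaches with the matching --dhchap-key/--dhchap-ctrlr-key, and the script checks the controller name before detaching again. A hand-condensed sketch of one such round follows; it assumes the target RPC socket is the default /var/tmp/spdk.sock (the trace only names the host-side socket /var/tmp/host.sock explicitly) and that key0..key3 are keyring names registered earlier in the script.

    # One DH-HMAC-CHAP rotation round, condensed from the trace (sketch, not the literal test script)
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535

    # Target side: allow only key2/key3 for this host (assumes the default /var/tmp/spdk.sock socket).
    $RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host side: reconnect with the matching key pair.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Verify the controller came up, then drop it before the next case.
    $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0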
05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:44:59.750 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:44:59.750 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:00.009 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:00.009 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:00.009 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:00.268 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:45:00.268 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:00.268 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:00.268 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:00.268 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:45:00.268 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:45:00.268 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:45:00.527 nvme0n1 00:45:00.527 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:45:00.527 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:00.527 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:45:00.786 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:00.786 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:00.786 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key key3 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:01.044 05:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: '' 2s 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: ]] 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZWQ5Mjg3MjRiZmEzZmE4NDBhODZkZmM3NWNmYTkyMjZguSSm: 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:45:01.044 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key1 --dhchap-ctrlr-key key2 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: 2s 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:45:03.590 05:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: ]] 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OWEwYjIzNjNiZWJlN2M4OWY0NWY2MzFjZDhlN2M1NWI3MzkyOTQwZDBlMDllZmU06r30fA==: 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:45:03.590 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:05.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:05.490 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:06.059 nvme0n1 00:45:06.059 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:06.059 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:06.059 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:06.059 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:06.059 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:06.059 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:06.627 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:45:06.627 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:45:06.627 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:06.886 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:06.886 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:45:06.886 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:06.886 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:07.145 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.145 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:45:07.145 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:45:07.145 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:45:07.145 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:45:07.145 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:07.404 05:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:45:07.404 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:45:07.973 request: 00:45:07.973 { 00:45:07.973 "name": "nvme0", 00:45:07.973 "dhchap_key": "key1", 00:45:07.973 "dhchap_ctrlr_key": "key3", 00:45:07.973 "method": "bdev_nvme_set_keys", 00:45:07.973 "req_id": 1 00:45:07.973 } 00:45:07.973 Got JSON-RPC error response 00:45:07.973 response: 00:45:07.973 { 00:45:07.973 "code": -13, 00:45:07.973 "message": "Permission denied" 00:45:07.973 } 00:45:07.973 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:07.973 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:07.973 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:07.973 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:07.973 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:45:07.973 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:45:07.973 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:08.240 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:45:08.240 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:09.619 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:10.556 nvme0n1 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:45:10.556 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:45:11.124 request: 00:45:11.124 { 00:45:11.124 "name": "nvme0", 00:45:11.124 "dhchap_key": "key2", 00:45:11.124 "dhchap_ctrlr_key": "key0", 00:45:11.124 "method": "bdev_nvme_set_keys", 00:45:11.124 "req_id": 1 00:45:11.124 } 00:45:11.124 Got JSON-RPC error response 00:45:11.124 response: 00:45:11.124 { 00:45:11.124 "code": -13, 00:45:11.124 "message": "Permission denied" 00:45:11.124 } 00:45:11.124 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:11.124 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:11.124 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:11.124 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:11.124 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:45:11.124 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:11.124 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:45:11.382 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:45:11.382 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:45:12.318 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:45:12.318 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:45:12.318 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67031 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67031 ']' 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67031 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67031 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:12.577 killing process with pid 67031 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:12.577 05:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67031' 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67031 00:45:12.577 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67031 00:45:12.836 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:45:12.836 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:12.836 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:13.095 rmmod nvme_tcp 00:45:13.095 rmmod nvme_fabrics 00:45:13.095 rmmod nvme_keyring 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70104 ']' 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70104 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70104 ']' 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70104 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70104 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:13.095 killing process with pid 70104 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70104' 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70104 00:45:13.095 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70104 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
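The entries just before and after this point are the standard nvmftestfini teardown for the auth target run: kill the host app (pid 67031) and the nvmf target (pid 70104), unload the NVMe/TCP initiator modules, strip the SPDK_NVMF iptables rules, and dismantle the veth/bridge/namespace topology. Condensed into plain commands it is roughly the sketch below; the final ip netns delete is an assumption for what the _remove_spdk_ns wrapper in the trace ends up doing.

    # Teardown sketch condensed from the surrounding entries (not the literal common.sh code)
    sync
    modprobe -v -r nvme-tcp            # trace shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics

    # Keep every firewall rule except the SPDK-installed ones.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Undo the topology built by nvmf_veth_init.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: what _remove_spdk_ns boils down to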
00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.a3x /tmp/spdk.key-sha256.iZx /tmp/spdk.key-sha384.96c /tmp/spdk.key-sha512.D7Y /tmp/spdk.key-sha512.5E0 /tmp/spdk.key-sha384.MEF /tmp/spdk.key-sha256.ctc '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:45:13.353 00:45:13.353 real 3m11.156s 00:45:13.353 user 7m39.366s 00:45:13.353 sys 0m28.058s 00:45:13.353 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:13.612 ************************************ 00:45:13.612 END TEST nvmf_auth_target 
00:45:13.612 ************************************ 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # [[ tcp == \t\c\p ]] 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:45:13.612 ************************************ 00:45:13.612 START TEST nvmf_bdevio_no_huge 00:45:13.612 ************************************ 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:45:13.612 * Looking for test storage... 00:45:13.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:45:13.612 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:13.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.613 --rc genhtml_branch_coverage=1 00:45:13.613 --rc genhtml_function_coverage=1 00:45:13.613 --rc genhtml_legend=1 00:45:13.613 --rc geninfo_all_blocks=1 00:45:13.613 --rc geninfo_unexecuted_blocks=1 00:45:13.613 00:45:13.613 ' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:13.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.613 --rc genhtml_branch_coverage=1 00:45:13.613 --rc genhtml_function_coverage=1 00:45:13.613 --rc genhtml_legend=1 00:45:13.613 --rc geninfo_all_blocks=1 00:45:13.613 --rc geninfo_unexecuted_blocks=1 00:45:13.613 00:45:13.613 ' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:13.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.613 --rc genhtml_branch_coverage=1 00:45:13.613 --rc genhtml_function_coverage=1 00:45:13.613 --rc genhtml_legend=1 00:45:13.613 --rc geninfo_all_blocks=1 00:45:13.613 --rc geninfo_unexecuted_blocks=1 00:45:13.613 00:45:13.613 ' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:13.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.613 --rc genhtml_branch_coverage=1 00:45:13.613 --rc genhtml_function_coverage=1 00:45:13.613 --rc genhtml_legend=1 00:45:13.613 --rc geninfo_all_blocks=1 00:45:13.613 --rc geninfo_unexecuted_blocks=1 00:45:13.613 00:45:13.613 ' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:13.613 
05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:13.613 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:13.613 
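nvmftestinit has just laid out the address plan that nvmf_veth_init uses: two initiator-side interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) and two target-side interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) living inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. After probing for and removing any leftover devices (the "Cannot find device" messages below), the ip commands in the trace build exactly that topology; condensed to one veth pair per side it is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # bring every link up, repeat for the *_if2 pair, then open TCP/4420:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
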
05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:13.613 Cannot find device "nvmf_init_br" 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:45:13.613 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:13.871 Cannot find device "nvmf_init_br2" 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:13.872 Cannot find device "nvmf_tgt_br" 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:13.872 Cannot find device "nvmf_tgt_br2" 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:13.872 Cannot find device "nvmf_init_br" 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:13.872 Cannot find device "nvmf_init_br2" 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:13.872 Cannot find device "nvmf_tgt_br" 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:13.872 Cannot find device "nvmf_tgt_br2" 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:13.872 Cannot find device "nvmf_br" 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:13.872 Cannot find device "nvmf_init_if" 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:13.872 Cannot find device "nvmf_init_if2" 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:45:13.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:13.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:13.872 05:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:13.872 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:14.130 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:14.130 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:14.130 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:14.131 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:14.131 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:45:14.131 00:45:14.131 --- 10.0.0.3 ping statistics --- 00:45:14.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:14.131 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:14.131 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:14.131 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:45:14.131 00:45:14.131 --- 10.0.0.4 ping statistics --- 00:45:14.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:14.131 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:14.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:14.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:45:14.131 00:45:14.131 --- 10.0.0.1 ping statistics --- 00:45:14.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:14.131 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:14.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:14.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:45:14.131 00:45:14.131 --- 10.0.0.2 ping statistics --- 00:45:14.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:14.131 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70735 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70735 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70735 ']' 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:14.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:14.131 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:14.131 [2024-12-09 05:36:01.020653] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
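With connectivity verified by the pings above, nvmfappstart launches the target inside the namespace with hugepages disabled: 1024 MiB of heap-backed memory (--no-huge together with -s 1024), tracepoint mask 0xFFFF, and core mask 0x78 (the four reactors on cores 3-6 seen below). A minimal sketch of that step, using rpc_get_methods as a stand-in for the script's waitforlisten polling:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # wait for the app to answer on its default RPC socket (/var/tmp/spdk.sock)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
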
00:45:14.131 [2024-12-09 05:36:01.020936] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:45:14.702 [2024-12-09 05:36:01.377486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:14.702 [2024-12-09 05:36:01.448087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:14.702 [2024-12-09 05:36:01.448135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:14.702 [2024-12-09 05:36:01.448161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:14.702 [2024-12-09 05:36:01.448167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:14.702 [2024-12-09 05:36:01.448173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:14.702 [2024-12-09 05:36:01.448634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:45:14.702 [2024-12-09 05:36:01.449171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:45:14.702 [2024-12-09 05:36:01.449300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:45:14.702 [2024-12-09 05:36:01.449307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:14.702 [2024-12-09 05:36:01.454872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:15.269 [2024-12-09 05:36:02.102488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:15.269 Malloc0 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.269 05:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:15.269 [2024-12-09 05:36:02.150727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:15.269 { 00:45:15.269 "params": { 00:45:15.269 "name": "Nvme$subsystem", 00:45:15.269 "trtype": "$TEST_TRANSPORT", 00:45:15.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:15.269 "adrfam": "ipv4", 00:45:15.269 "trsvcid": "$NVMF_PORT", 00:45:15.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:15.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:15.269 "hdgst": ${hdgst:-false}, 00:45:15.269 "ddgst": ${ddgst:-false} 00:45:15.269 }, 00:45:15.269 "method": "bdev_nvme_attach_controller" 00:45:15.269 } 00:45:15.269 EOF 00:45:15.269 )") 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
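The rpc_cmd calls in the trace above drive scripts/rpc.py against the freshly started target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem carrying that namespace, and a listener on 10.0.0.3:4420. Reproduced by hand (the comments are a reading of the flags, not the script's own annotations), after which bdevio attaches over NVMe/TCP using the generated JSON printed next in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport, 8 KiB IO unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0                # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # bdevio then reads the bdev_nvme_attach_controller config the script feeds it
    # through process substitution (/dev/fd/62), also in no-huge mode:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
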
00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:45:15.269 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:15.269 "params": { 00:45:15.269 "name": "Nvme1", 00:45:15.269 "trtype": "tcp", 00:45:15.269 "traddr": "10.0.0.3", 00:45:15.269 "adrfam": "ipv4", 00:45:15.269 "trsvcid": "4420", 00:45:15.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:15.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:15.269 "hdgst": false, 00:45:15.269 "ddgst": false 00:45:15.269 }, 00:45:15.269 "method": "bdev_nvme_attach_controller" 00:45:15.269 }' 00:45:15.269 [2024-12-09 05:36:02.219147] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:45:15.269 [2024-12-09 05:36:02.219254] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70771 ] 00:45:15.836 [2024-12-09 05:36:02.591006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:45:15.836 [2024-12-09 05:36:02.667141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:15.836 [2024-12-09 05:36:02.667321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:15.836 [2024-12-09 05:36:02.667330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:15.836 [2024-12-09 05:36:02.680983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:16.094 I/O targets: 00:45:16.094 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:45:16.094 00:45:16.094 00:45:16.094 CUnit - A unit testing framework for C - Version 2.1-3 00:45:16.094 http://cunit.sourceforge.net/ 00:45:16.094 00:45:16.094 00:45:16.094 Suite: bdevio tests on: Nvme1n1 00:45:16.094 Test: blockdev write read block ...passed 00:45:16.094 Test: blockdev write zeroes read block ...passed 00:45:16.094 Test: blockdev write zeroes read no split ...passed 00:45:16.094 Test: blockdev write zeroes read split ...passed 00:45:16.094 Test: blockdev write zeroes read split partial ...passed 00:45:16.094 Test: blockdev reset ...[2024-12-09 05:36:02.885240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:45:16.094 [2024-12-09 05:36:02.885639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110b320 (9): Bad file descriptor 00:45:16.094 [2024-12-09 05:36:02.896407] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:45:16.094 passed 00:45:16.094 Test: blockdev write read 8 blocks ...passed 00:45:16.094 Test: blockdev write read size > 128k ...passed 00:45:16.094 Test: blockdev write read invalid size ...passed 00:45:16.094 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:16.094 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:16.094 Test: blockdev write read max offset ...passed 00:45:16.094 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:16.094 Test: blockdev writev readv 8 blocks ...passed 00:45:16.094 Test: blockdev writev readv 30 x 1block ...passed 00:45:16.094 Test: blockdev writev readv block ...passed 00:45:16.094 Test: blockdev writev readv size > 128k ...passed 00:45:16.094 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:16.094 Test: blockdev comparev and writev ...[2024-12-09 05:36:02.906022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:16.094 [2024-12-09 05:36:02.906190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:45:16.094 [2024-12-09 05:36:02.906219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:16.094 [2024-12-09 05:36:02.906230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:45:16.094 [2024-12-09 05:36:02.906557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:16.094 [2024-12-09 05:36:02.906595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:45:16.094 [2024-12-09 05:36:02.906627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:16.094 [2024-12-09 05:36:02.906637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:45:16.094 [2024-12-09 05:36:02.907061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:16.094 [2024-12-09 05:36:02.907089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:45:16.094 [2024-12-09 05:36:02.907107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:16.094 [2024-12-09 05:36:02.907117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:45:16.094 [2024-12-09 05:36:02.907433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:16.094 [2024-12-09 05:36:02.907453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:45:16.094 [2024-12-09 05:36:02.907470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:16.094 [2024-12-09 05:36:02.907479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:45:16.094 passed 00:45:16.094 Test: blockdev nvme passthru rw ...passed 00:45:16.094 Test: blockdev nvme passthru vendor specific ...passed 00:45:16.094 Test: blockdev nvme admin passthru ...[2024-12-09 05:36:02.908443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:16.095 [2024-12-09 05:36:02.908475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:45:16.095 [2024-12-09 05:36:02.908596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:16.095 [2024-12-09 05:36:02.908618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:45:16.095 [2024-12-09 05:36:02.908736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:16.095 [2024-12-09 05:36:02.908757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:45:16.095 [2024-12-09 05:36:02.908899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:16.095 [2024-12-09 05:36:02.908918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:45:16.095 passed 00:45:16.095 Test: blockdev copy ...passed 00:45:16.095 00:45:16.095 Run Summary: Type Total Ran Passed Failed Inactive 00:45:16.095 suites 1 1 n/a 0 0 00:45:16.095 tests 23 23 23 0 0 00:45:16.095 asserts 152 152 152 0 n/a 00:45:16.095 00:45:16.095 Elapsed time = 0.156 seconds 00:45:16.353 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:16.353 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.353 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:16.353 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.353 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:45:16.353 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:45:16.353 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:16.353 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:16.612 rmmod nvme_tcp 00:45:16.612 rmmod nvme_fabrics 00:45:16.612 rmmod nvme_keyring 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70735 ']' 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70735 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70735 ']' 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70735 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70735 00:45:16.612 killing process with pid 70735 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70735' 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70735 00:45:16.612 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70735 00:45:16.871 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:16.871 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:16.871 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:16.871 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:45:16.871 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:45:16.871 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:16.871 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:45:16.871 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:16.871 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:16.871 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:17.130 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:17.130 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:17.130 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:17.130 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:17.130 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:17.130 05:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:17.130 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:17.130 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:17.130 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:17.130 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:17.130 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:17.130 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:17.130 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:17.130 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:17.130 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:17.130 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:17.130 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:45:17.130 00:45:17.130 real 0m3.683s 00:45:17.130 user 0m11.081s 00:45:17.130 sys 0m1.259s 00:45:17.130 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:17.130 ************************************ 00:45:17.130 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:17.130 END TEST nvmf_bdevio_no_huge 00:45:17.130 ************************************ 00:45:17.390 05:36:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # '[' tcp = tcp ']' 00:45:17.390 05:36:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:45:17.390 05:36:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:45:17.391 ************************************ 00:45:17.391 START TEST nvmf_tls 00:45:17.391 ************************************ 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:45:17.391 * Looking for test storage... 
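The bdevio run above finishes with nvmftestfini: the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the target (pid 70735) is killed, iptr restores iptables minus the SPDK_NVMF-tagged rules, and nvmf_veth_fini deletes the bridge, veth pairs and namespace before nvmf_tls sets everything up again. Condensed, and assuming the hidden _remove_spdk_ns step amounts to deleting the namespace:

    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep everything except the tagged rules
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns delete nvmf_tgt_ns_spdk                        # assumed equivalent of _remove_spdk_ns
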
00:45:17.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:17.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:17.391 --rc genhtml_branch_coverage=1 00:45:17.391 --rc genhtml_function_coverage=1 00:45:17.391 --rc genhtml_legend=1 00:45:17.391 --rc geninfo_all_blocks=1 00:45:17.391 --rc geninfo_unexecuted_blocks=1 00:45:17.391 00:45:17.391 ' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:17.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:17.391 --rc genhtml_branch_coverage=1 00:45:17.391 --rc genhtml_function_coverage=1 00:45:17.391 --rc genhtml_legend=1 00:45:17.391 --rc geninfo_all_blocks=1 00:45:17.391 --rc geninfo_unexecuted_blocks=1 00:45:17.391 00:45:17.391 ' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:17.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:17.391 --rc genhtml_branch_coverage=1 00:45:17.391 --rc genhtml_function_coverage=1 00:45:17.391 --rc genhtml_legend=1 00:45:17.391 --rc geninfo_all_blocks=1 00:45:17.391 --rc geninfo_unexecuted_blocks=1 00:45:17.391 00:45:17.391 ' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:17.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:17.391 --rc genhtml_branch_coverage=1 00:45:17.391 --rc genhtml_function_coverage=1 00:45:17.391 --rc genhtml_legend=1 00:45:17.391 --rc geninfo_all_blocks=1 00:45:17.391 --rc geninfo_unexecuted_blocks=1 00:45:17.391 00:45:17.391 ' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:17.391 05:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:17.391 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:17.391 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:17.392 
05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:17.392 Cannot find device "nvmf_init_br" 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:17.392 Cannot find device "nvmf_init_br2" 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:17.392 Cannot find device "nvmf_tgt_br" 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:17.392 Cannot find device "nvmf_tgt_br2" 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:45:17.392 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:17.651 Cannot find device "nvmf_init_br" 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:17.651 Cannot find device "nvmf_init_br2" 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:17.651 Cannot find device "nvmf_tgt_br" 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:17.651 Cannot find device "nvmf_tgt_br2" 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:17.651 Cannot find device "nvmf_br" 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:17.651 Cannot find device "nvmf_init_if" 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:17.651 Cannot find device "nvmf_init_if2" 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:17.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:17.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:17.651 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:17.652 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:17.652 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:17.652 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:17.652 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:17.652 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:17.652 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:17.652 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:17.652 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:17.652 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:17.911 05:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:17.911 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:17.911 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:45:17.911 00:45:17.911 --- 10.0.0.3 ping statistics --- 00:45:17.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:17.911 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:17.911 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:17.911 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:45:17.911 00:45:17.911 --- 10.0.0.4 ping statistics --- 00:45:17.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:17.911 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:17.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:17.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:45:17.911 00:45:17.911 --- 10.0.0.1 ping statistics --- 00:45:17.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:17.911 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:17.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:17.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:45:17.911 00:45:17.911 --- 10.0.0.2 ping statistics --- 00:45:17.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:17.911 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71010 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:45:17.911 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71010 00:45:17.912 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71010 ']' 00:45:17.912 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:17.912 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:17.912 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:17.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:17.912 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:17.912 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:17.912 [2024-12-09 05:36:04.767347] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
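The block above is the whole data path for these tests: each interface is one end of a veth pair, the *_br peers are enslaved to the nvmf_br bridge in the root namespace, the target-side interfaces are moved into the nvmf_tgt_ns_spdk namespace, iptables explicitly accepts NVMe/TCP traffic on port 4420, and the four pings prove both directions work before nvmf_tgt is started inside the namespace. A minimal sketch of the same topology, cut down to one initiator/target pair (names and addresses copied from the trace; run as root, no error handling):

  # one veth pair per side; the *_br peers will be enslaved to the bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator address stays in the root namespace, target address lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring everything up and join the two halves with a bridge
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # let NVMe/TCP (port 4420) in and let traffic cross the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # reachability check in both directions, as in the trace
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1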
00:45:17.912 [2024-12-09 05:36:04.767434] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:18.171 [2024-12-09 05:36:04.924137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:18.171 [2024-12-09 05:36:04.963372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:18.171 [2024-12-09 05:36:04.963432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:18.171 [2024-12-09 05:36:04.963447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:18.171 [2024-12-09 05:36:04.963457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:18.171 [2024-12-09 05:36:04.963466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:18.171 [2024-12-09 05:36:04.963861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:18.171 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:18.171 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:45:18.171 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:18.171 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:18.171 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:18.171 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:18.171 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:45:18.171 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:45:18.437 true 00:45:18.437 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:18.437 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:45:18.699 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:45:18.699 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:45:18.699 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:45:18.957 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:18.957 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:45:19.216 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:45:19.216 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:45:19.216 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:45:19.474 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:45:19.474 05:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:19.733 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:45:19.733 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:45:19.733 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:19.733 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:45:19.992 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:45:19.992 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:45:19.992 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:45:20.269 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:20.269 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:45:20.528 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:45:20.528 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:45:20.528 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:45:20.787 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:20.788 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:21.047 05:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.FuAGoozGRN 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.lBxTV8yP4i 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.FuAGoozGRN 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.lBxTV8yP4i 00:45:21.047 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:45:21.306 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:45:21.566 [2024-12-09 05:36:08.395776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:21.566 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.FuAGoozGRN 00:45:21.566 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FuAGoozGRN 00:45:21.566 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:45:21.825 [2024-12-09 05:36:08.629585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:21.825 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:45:22.084 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:45:22.344 [2024-12-09 05:36:09.105675] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:22.344 [2024-12-09 05:36:09.105889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:22.344 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:45:22.603 malloc0 00:45:22.603 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:45:22.603 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.FuAGoozGRN 00:45:23.171 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:45:23.171 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.FuAGoozGRN 00:45:35.457 Initializing NVMe Controllers 00:45:35.458 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:45:35.458 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:45:35.458 Initialization complete. Launching workers. 00:45:35.458 ======================================================== 00:45:35.458 Latency(us) 00:45:35.458 Device Information : IOPS MiB/s Average min max 00:45:35.458 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10738.57 41.95 5961.13 929.68 7935.72 00:45:35.458 ======================================================== 00:45:35.458 Total : 10738.57 41.95 5961.13 929.68 7935.72 00:45:35.458 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FuAGoozGRN 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FuAGoozGRN 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71237 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71237 /var/tmp/bdevperf.sock 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71237 ']' 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:35.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
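The TLS plumbing for this stage is all set up over JSON-RPC before the perf and bdevperf runs above: the ssl socket implementation is selected and pinned to TLS 1.3, the ktls toggle is exercised, format_interchange_psk renders two keys in the NVMe/TCP PSK interchange format (the NVMeTLSkey-1 prefix, a hash indicator, then base64 of the configured secret with a CRC-32 trailer, as in the keys printed above), and the first key is registered as key0 and bound to host1 on the subsystem. A condensed, target-side sketch of that recipe, assuming /tmp/psk.key already contains an interchange-format key (the real test uses the mktemp paths shown above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # socket layer: ssl implementation, TLS 1.3 only, then finish init
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init

  # transport, subsystem, and a TLS-enabled listener (-k) on the target address
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k

  # backing namespace for the subsystem
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # register the PSK file as key0 and allow host1 to use it
  chmod 0600 /tmp/psk.key
  $rpc keyring_file_add_key key0 /tmp/psk.key
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0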
00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:35.458 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:35.458 [2024-12-09 05:36:20.355902] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:45:35.458 [2024-12-09 05:36:20.356005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71237 ] 00:45:35.458 [2024-12-09 05:36:20.513784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:35.458 [2024-12-09 05:36:20.552469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:35.458 [2024-12-09 05:36:20.585620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:35.458 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:35.458 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:45:35.458 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FuAGoozGRN 00:45:35.458 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:45:35.458 [2024-12-09 05:36:21.808508] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:35.458 TLSTESTn1 00:45:35.458 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:45:35.458 Running I/O for 10 seconds... 
00:45:37.382 4480.00 IOPS, 17.50 MiB/s [2024-12-09T05:36:25.286Z] 4525.50 IOPS, 17.68 MiB/s [2024-12-09T05:36:26.221Z] 4523.33 IOPS, 17.67 MiB/s [2024-12-09T05:36:27.155Z] 4534.25 IOPS, 17.71 MiB/s [2024-12-09T05:36:28.090Z] 4531.20 IOPS, 17.70 MiB/s [2024-12-09T05:36:29.025Z] 4531.17 IOPS, 17.70 MiB/s [2024-12-09T05:36:30.402Z] 4534.86 IOPS, 17.71 MiB/s [2024-12-09T05:36:31.336Z] 4534.75 IOPS, 17.71 MiB/s [2024-12-09T05:36:32.272Z] 4554.00 IOPS, 17.79 MiB/s [2024-12-09T05:36:32.272Z] 4573.10 IOPS, 17.86 MiB/s 00:45:45.300 Latency(us) 00:45:45.300 [2024-12-09T05:36:32.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:45.300 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:45:45.300 Verification LBA range: start 0x0 length 0x2000 00:45:45.300 TLSTESTn1 : 10.01 4578.87 17.89 0.00 0.00 27906.44 5451.40 20852.36 00:45:45.300 [2024-12-09T05:36:32.272Z] =================================================================================================================== 00:45:45.300 [2024-12-09T05:36:32.272Z] Total : 4578.87 17.89 0.00 0.00 27906.44 5451.40 20852.36 00:45:45.300 { 00:45:45.300 "results": [ 00:45:45.300 { 00:45:45.300 "job": "TLSTESTn1", 00:45:45.300 "core_mask": "0x4", 00:45:45.300 "workload": "verify", 00:45:45.300 "status": "finished", 00:45:45.300 "verify_range": { 00:45:45.300 "start": 0, 00:45:45.300 "length": 8192 00:45:45.300 }, 00:45:45.300 "queue_depth": 128, 00:45:45.300 "io_size": 4096, 00:45:45.300 "runtime": 10.014909, 00:45:45.300 "iops": 4578.873357710989, 00:45:45.300 "mibps": 17.88622405355855, 00:45:45.300 "io_failed": 0, 00:45:45.300 "io_timeout": 0, 00:45:45.300 "avg_latency_us": 27906.439786926552, 00:45:45.300 "min_latency_us": 5451.403636363636, 00:45:45.300 "max_latency_us": 20852.363636363636 00:45:45.300 } 00:45:45.300 ], 00:45:45.301 "core_count": 1 00:45:45.301 } 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71237 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71237 ']' 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71237 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71237 00:45:45.301 killing process with pid 71237 00:45:45.301 Received shutdown signal, test time was about 10.000000 seconds 00:45:45.301 00:45:45.301 Latency(us) 00:45:45.301 [2024-12-09T05:36:32.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:45.301 [2024-12-09T05:36:32.273Z] =================================================================================================================== 00:45:45.301 [2024-12-09T05:36:32.273Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71237' 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71237 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71237 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lBxTV8yP4i 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lBxTV8yP4i 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lBxTV8yP4i 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lBxTV8yP4i 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71376 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71376 /var/tmp/bdevperf.sock 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71376 ']' 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:45.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:45.301 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:45.560 [2024-12-09 05:36:32.292254] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
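The verify run that just finished, and every failure case below, drive the initiator the same way: bdevperf is started idle (-z) on its own RPC socket, the test waits for that socket (waitforlisten), the PSK is registered through it, a controller is attached with --psk, and only then does bdevperf.py kick off the workload. A sketch of that sequence (key file path illustrative, same socket and NQNs as the trace):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # start bdevperf idle; the harness waits for its RPC socket before issuing RPCs
  $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &

  # give the initiator the interchange-format PSK and attach over TLS
  $spdk/scripts/rpc.py -s $sock keyring_file_add_key key0 /tmp/psk.key
  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # run I/O against the attached bdev and print the summary
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests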
00:45:45.560 [2024-12-09 05:36:32.292352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71376 ] 00:45:45.560 [2024-12-09 05:36:32.441646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:45.560 [2024-12-09 05:36:32.469830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:45.560 [2024-12-09 05:36:32.497082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:45.819 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:45.819 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:45:45.819 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lBxTV8yP4i 00:45:45.819 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:45:46.078 [2024-12-09 05:36:33.011326] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:46.078 [2024-12-09 05:36:33.016094] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:46.078 [2024-12-09 05:36:33.016788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80cff0 (107): Transport endpoint is not connected 00:45:46.078 [2024-12-09 05:36:33.017776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80cff0 (9): Bad file descriptor 00:45:46.078 [2024-12-09 05:36:33.018773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:45:46.078 [2024-12-09 05:36:33.018792] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:45:46.078 [2024-12-09 05:36:33.018801] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:45:46.078 [2024-12-09 05:36:33.018814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:45:46.078 request: 00:45:46.078 { 00:45:46.078 "name": "TLSTEST", 00:45:46.078 "trtype": "tcp", 00:45:46.078 "traddr": "10.0.0.3", 00:45:46.078 "adrfam": "ipv4", 00:45:46.078 "trsvcid": "4420", 00:45:46.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:46.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:46.079 "prchk_reftag": false, 00:45:46.079 "prchk_guard": false, 00:45:46.079 "hdgst": false, 00:45:46.079 "ddgst": false, 00:45:46.079 "psk": "key0", 00:45:46.079 "allow_unrecognized_csi": false, 00:45:46.079 "method": "bdev_nvme_attach_controller", 00:45:46.079 "req_id": 1 00:45:46.079 } 00:45:46.079 Got JSON-RPC error response 00:45:46.079 response: 00:45:46.079 { 00:45:46.079 "code": -5, 00:45:46.079 "message": "Input/output error" 00:45:46.079 } 00:45:46.079 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71376 00:45:46.079 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71376 ']' 00:45:46.079 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71376 00:45:46.079 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:45:46.079 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:46.079 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71376 00:45:46.338 killing process with pid 71376 00:45:46.338 Received shutdown signal, test time was about 10.000000 seconds 00:45:46.338 00:45:46.338 Latency(us) 00:45:46.338 [2024-12-09T05:36:33.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:46.338 [2024-12-09T05:36:33.310Z] =================================================================================================================== 00:45:46.338 [2024-12-09T05:36:33.310Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71376' 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71376 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71376 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FuAGoozGRN 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FuAGoozGRN 
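The wrong-key attempt above and the remaining cases (unknown host NQN, unknown subsystem NQN, empty key path) are all expected to fail, so each run_bdevperf call is wrapped in the NOT helper from common/autotest_common.sh, which inverts the exit status: the JSON-RPC "Input/output error" responses count as a pass, and an attach that unexpectedly succeeds fails the test. A stripped-down sketch of that pattern:

  # succeed only when the wrapped command fails (simplified version of NOT)
  NOT() {
      if "$@"; then
          return 1    # unexpected success
      fi
      return 0        # expected failure
  }

  # example: attaching with the wrong PSK must not succeed
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lBxTV8yP4i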
00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FuAGoozGRN 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FuAGoozGRN 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71392 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71392 /var/tmp/bdevperf.sock 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71392 ']' 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:46.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:46.338 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:46.338 [2024-12-09 05:36:33.282627] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
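The attach attempted below fails on the target side before the handshake completes: the initiator presents the PSK identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1", but only host1 was added to cnode1 with a key, so tcp_sock_get_key finds nothing and the connection is dropped, which the initiator then reports as "Transport endpoint is not connected". If host2 were actually meant to connect, the target would need its own host entry, roughly as follows (hypothetical key name and path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc keyring_file_add_key key1 /tmp/psk-host2.key
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key1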
00:45:46.338 [2024-12-09 05:36:33.282717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71392 ] 00:45:46.597 [2024-12-09 05:36:33.423932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:46.597 [2024-12-09 05:36:33.452420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:46.597 [2024-12-09 05:36:33.480321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:47.533 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:47.533 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:45:47.533 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FuAGoozGRN 00:45:47.533 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:45:47.792 [2024-12-09 05:36:34.634862] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:47.792 [2024-12-09 05:36:34.645615] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:45:47.792 [2024-12-09 05:36:34.645849] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:45:47.792 [2024-12-09 05:36:34.646019] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:47.792 [2024-12-09 05:36:34.646448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216bff0 (107): Transport endpoint is not connected 00:45:47.792 [2024-12-09 05:36:34.647438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216bff0 (9): Bad file descriptor 00:45:47.792 [2024-12-09 05:36:34.648435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:45:47.792 [2024-12-09 05:36:34.648456] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:45:47.792 [2024-12-09 05:36:34.648477] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:45:47.792 [2024-12-09 05:36:34.648490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:45:47.792 request: 00:45:47.792 { 00:45:47.792 "name": "TLSTEST", 00:45:47.792 "trtype": "tcp", 00:45:47.792 "traddr": "10.0.0.3", 00:45:47.792 "adrfam": "ipv4", 00:45:47.792 "trsvcid": "4420", 00:45:47.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:47.792 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:47.792 "prchk_reftag": false, 00:45:47.792 "prchk_guard": false, 00:45:47.792 "hdgst": false, 00:45:47.792 "ddgst": false, 00:45:47.792 "psk": "key0", 00:45:47.792 "allow_unrecognized_csi": false, 00:45:47.792 "method": "bdev_nvme_attach_controller", 00:45:47.792 "req_id": 1 00:45:47.792 } 00:45:47.792 Got JSON-RPC error response 00:45:47.792 response: 00:45:47.792 { 00:45:47.792 "code": -5, 00:45:47.792 "message": "Input/output error" 00:45:47.792 } 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71392 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71392 ']' 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71392 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71392 00:45:47.792 killing process with pid 71392 00:45:47.792 Received shutdown signal, test time was about 10.000000 seconds 00:45:47.792 00:45:47.792 Latency(us) 00:45:47.792 [2024-12-09T05:36:34.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:47.792 [2024-12-09T05:36:34.764Z] =================================================================================================================== 00:45:47.792 [2024-12-09T05:36:34.764Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71392' 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71392 00:45:47.792 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71392 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FuAGoozGRN 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FuAGoozGRN 
00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:45:48.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FuAGoozGRN 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FuAGoozGRN 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71421 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71421 /var/tmp/bdevperf.sock 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71421 ']' 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:48.051 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:48.051 [2024-12-09 05:36:34.916174] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:45:48.051 [2024-12-09 05:36:34.916466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71421 ] 00:45:48.319 [2024-12-09 05:36:35.059319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:48.319 [2024-12-09 05:36:35.086837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:48.319 [2024-12-09 05:36:35.114938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:49.253 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:49.253 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:45:49.253 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FuAGoozGRN 00:45:49.253 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:45:49.512 [2024-12-09 05:36:36.345083] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:49.512 [2024-12-09 05:36:36.355884] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:45:49.512 [2024-12-09 05:36:36.356144] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:45:49.512 [2024-12-09 05:36:36.356313] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:49.512 [2024-12-09 05:36:36.356799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a2ff0 (107): Transport endpoint is not connected 00:45:49.513 [2024-12-09 05:36:36.357789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a2ff0 (9): Bad file descriptor 00:45:49.513 [2024-12-09 05:36:36.358785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:45:49.513 [2024-12-09 05:36:36.358803] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:45:49.513 [2024-12-09 05:36:36.358813] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:45:49.513 [2024-12-09 05:36:36.358826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:45:49.513 request: 00:45:49.513 { 00:45:49.513 "name": "TLSTEST", 00:45:49.513 "trtype": "tcp", 00:45:49.513 "traddr": "10.0.0.3", 00:45:49.513 "adrfam": "ipv4", 00:45:49.513 "trsvcid": "4420", 00:45:49.513 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:49.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:49.513 "prchk_reftag": false, 00:45:49.513 "prchk_guard": false, 00:45:49.513 "hdgst": false, 00:45:49.513 "ddgst": false, 00:45:49.513 "psk": "key0", 00:45:49.513 "allow_unrecognized_csi": false, 00:45:49.513 "method": "bdev_nvme_attach_controller", 00:45:49.513 "req_id": 1 00:45:49.513 } 00:45:49.513 Got JSON-RPC error response 00:45:49.513 response: 00:45:49.513 { 00:45:49.513 "code": -5, 00:45:49.513 "message": "Input/output error" 00:45:49.513 } 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71421 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71421 ']' 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71421 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71421 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71421' 00:45:49.513 killing process with pid 71421 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71421 00:45:49.513 Received shutdown signal, test time was about 10.000000 seconds 00:45:49.513 00:45:49.513 Latency(us) 00:45:49.513 [2024-12-09T05:36:36.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:49.513 [2024-12-09T05:36:36.485Z] =================================================================================================================== 00:45:49.513 [2024-12-09T05:36:36.485Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:49.513 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71421 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:45:49.772 05:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:49.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71455 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71455 /var/tmp/bdevperf.sock 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71455 ']' 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:49.772 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:49.772 [2024-12-09 05:36:36.635833] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:45:49.772 [2024-12-09 05:36:36.635927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71455 ] 00:45:50.033 [2024-12-09 05:36:36.777472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:50.033 [2024-12-09 05:36:36.805735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:50.033 [2024-12-09 05:36:36.834284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:50.601 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:50.601 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:45:50.601 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:45:50.860 [2024-12-09 05:36:37.756361] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:45:50.860 [2024-12-09 05:36:37.756399] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:50.860 request: 00:45:50.860 { 00:45:50.860 "name": "key0", 00:45:50.860 "path": "", 00:45:50.860 "method": "keyring_file_add_key", 00:45:50.860 "req_id": 1 00:45:50.860 } 00:45:50.860 Got JSON-RPC error response 00:45:50.860 response: 00:45:50.860 { 00:45:50.860 "code": -1, 00:45:50.860 "message": "Operation not permitted" 00:45:50.860 } 00:45:50.860 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:45:51.120 [2024-12-09 05:36:38.024502] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:51.120 [2024-12-09 05:36:38.024568] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:45:51.120 request: 00:45:51.120 { 00:45:51.120 "name": "TLSTEST", 00:45:51.120 "trtype": "tcp", 00:45:51.120 "traddr": "10.0.0.3", 00:45:51.120 "adrfam": "ipv4", 00:45:51.120 "trsvcid": "4420", 00:45:51.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:51.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:51.120 "prchk_reftag": false, 00:45:51.120 "prchk_guard": false, 00:45:51.120 "hdgst": false, 00:45:51.120 "ddgst": false, 00:45:51.120 "psk": "key0", 00:45:51.120 "allow_unrecognized_csi": false, 00:45:51.120 "method": "bdev_nvme_attach_controller", 00:45:51.120 "req_id": 1 00:45:51.120 } 00:45:51.120 Got JSON-RPC error response 00:45:51.120 response: 00:45:51.120 { 00:45:51.120 "code": -126, 00:45:51.120 "message": "Required key not available" 00:45:51.120 } 00:45:51.120 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71455 00:45:51.120 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71455 ']' 00:45:51.120 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71455 00:45:51.120 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:45:51.120 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:51.120 05:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71455 00:45:51.120 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:45:51.120 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:45:51.120 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71455' 00:45:51.120 killing process with pid 71455 00:45:51.120 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71455 00:45:51.120 Received shutdown signal, test time was about 10.000000 seconds 00:45:51.120 00:45:51.120 Latency(us) 00:45:51.120 [2024-12-09T05:36:38.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:51.120 [2024-12-09T05:36:38.092Z] =================================================================================================================== 00:45:51.120 [2024-12-09T05:36:38.092Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:51.120 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71455 00:45:51.379 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:45:51.379 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:45:51.379 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:51.379 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:51.379 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:51.379 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71010 00:45:51.379 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71010 ']' 00:45:51.380 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71010 00:45:51.380 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:45:51.380 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:51.380 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71010 00:45:51.380 killing process with pid 71010 00:45:51.380 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:51.380 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:51.380 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71010' 00:45:51.380 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71010 00:45:51.380 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71010 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.gvIUu5FAoY 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.gvIUu5FAoY 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71494 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71494 00:45:51.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71494 ']' 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:51.639 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:51.639 [2024-12-09 05:36:38.557276] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:45:51.639 [2024-12-09 05:36:38.557361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:51.898 [2024-12-09 05:36:38.693262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:51.898 [2024-12-09 05:36:38.721224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:51.898 [2024-12-09 05:36:38.721271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
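The key_long value generated above is in the NVMe/TCP PSK interchange format: the literal prefix "NVMeTLSkey-1:", a two-hex-digit hash selector ("02", from the digest argument 2 passed to format_interchange_psk), and a base64 blob of the configured PSK bytes with a CRC-32 appended, closed by ":". A rough stand-in for the format_key helper that the "python -" step above implements (the little-endian CRC byte order is an assumption, not something this log confirms; the authoritative value is the NVMeTLSkey-1:02:MDAx... string printed by the run itself):

    key=00112233445566778899aabbccddeeff0011223344556677
    # CRC-32 appended little-endian (assumed); compare the result against the key_long printed above
    b64=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key")
    echo "NVMeTLSkey-1:02:${b64}:"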
00:45:51.898 [2024-12-09 05:36:38.721297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:51.898 [2024-12-09 05:36:38.721303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:51.898 [2024-12-09 05:36:38.721310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:51.898 [2024-12-09 05:36:38.721615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:51.899 [2024-12-09 05:36:38.750187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:52.835 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:52.835 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:45:52.835 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:52.835 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:52.835 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:52.835 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:52.835 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.gvIUu5FAoY 00:45:52.835 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gvIUu5FAoY 00:45:52.835 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:45:52.835 [2024-12-09 05:36:39.749345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:52.835 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:45:53.094 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:45:53.354 [2024-12-09 05:36:40.321475] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:53.354 [2024-12-09 05:36:40.321747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:53.613 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:45:53.613 malloc0 00:45:53.613 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:45:53.872 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY 00:45:54.131 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gvIUu5FAoY 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gvIUu5FAoY 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71549 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71549 /var/tmp/bdevperf.sock 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71549 ']' 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:54.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:54.414 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:54.414 [2024-12-09 05:36:41.255234] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:45:54.414 [2024-12-09 05:36:41.255474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71549 ] 00:45:54.738 [2024-12-09 05:36:41.407175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:54.738 [2024-12-09 05:36:41.447152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:54.738 [2024-12-09 05:36:41.481003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:55.320 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:55.320 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:45:55.320 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY 00:45:55.604 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:45:55.863 [2024-12-09 05:36:42.615688] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:55.863 TLSTESTn1 00:45:55.863 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:45:55.863 Running I/O for 10 seconds... 00:45:58.175 4550.00 IOPS, 17.77 MiB/s [2024-12-09T05:36:46.081Z] 4608.00 IOPS, 18.00 MiB/s [2024-12-09T05:36:47.017Z] 4608.00 IOPS, 18.00 MiB/s [2024-12-09T05:36:47.953Z] 4657.75 IOPS, 18.19 MiB/s [2024-12-09T05:36:48.888Z] 4681.80 IOPS, 18.29 MiB/s [2024-12-09T05:36:49.821Z] 4703.50 IOPS, 18.37 MiB/s [2024-12-09T05:36:51.199Z] 4710.71 IOPS, 18.40 MiB/s [2024-12-09T05:36:52.134Z] 4714.12 IOPS, 18.41 MiB/s [2024-12-09T05:36:53.070Z] 4718.56 IOPS, 18.43 MiB/s [2024-12-09T05:36:53.070Z] 4723.80 IOPS, 18.45 MiB/s 00:46:06.098 Latency(us) 00:46:06.098 [2024-12-09T05:36:53.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:06.098 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:46:06.098 Verification LBA range: start 0x0 length 0x2000 00:46:06.098 TLSTESTn1 : 10.01 4729.55 18.47 0.00 0.00 27018.07 5183.30 20137.43 00:46:06.099 [2024-12-09T05:36:53.071Z] =================================================================================================================== 00:46:06.099 [2024-12-09T05:36:53.071Z] Total : 4729.55 18.47 0.00 0.00 27018.07 5183.30 20137.43 00:46:06.099 { 00:46:06.099 "results": [ 00:46:06.099 { 00:46:06.099 "job": "TLSTESTn1", 00:46:06.099 "core_mask": "0x4", 00:46:06.099 "workload": "verify", 00:46:06.099 "status": "finished", 00:46:06.099 "verify_range": { 00:46:06.099 "start": 0, 00:46:06.099 "length": 8192 00:46:06.099 }, 00:46:06.099 "queue_depth": 128, 00:46:06.099 "io_size": 4096, 00:46:06.099 "runtime": 10.014051, 00:46:06.099 "iops": 4729.55450296788, 00:46:06.099 "mibps": 18.47482227721828, 00:46:06.099 "io_failed": 0, 00:46:06.099 "io_timeout": 0, 00:46:06.099 "avg_latency_us": 27018.072714681122, 00:46:06.099 "min_latency_us": 5183.301818181818, 00:46:06.099 
"max_latency_us": 20137.425454545453 00:46:06.099 } 00:46:06.099 ], 00:46:06.099 "core_count": 1 00:46:06.099 } 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71549 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71549 ']' 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71549 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71549 00:46:06.099 killing process with pid 71549 00:46:06.099 Received shutdown signal, test time was about 10.000000 seconds 00:46:06.099 00:46:06.099 Latency(us) 00:46:06.099 [2024-12-09T05:36:53.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:06.099 [2024-12-09T05:36:53.071Z] =================================================================================================================== 00:46:06.099 [2024-12-09T05:36:53.071Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71549' 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71549 00:46:06.099 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71549 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.gvIUu5FAoY 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gvIUu5FAoY 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gvIUu5FAoY 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gvIUu5FAoY 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gvIUu5FAoY 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71685 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71685 /var/tmp/bdevperf.sock 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71685 ']' 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:06.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:06.099 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:06.358 [2024-12-09 05:36:53.093057] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:46:06.358 [2024-12-09 05:36:53.093339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71685 ] 00:46:06.358 [2024-12-09 05:36:53.237870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:06.358 [2024-12-09 05:36:53.266056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:06.358 [2024-12-09 05:36:53.294578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:07.293 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:07.293 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:07.293 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY 00:46:07.293 [2024-12-09 05:36:54.252899] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gvIUu5FAoY': 0100666 00:46:07.293 [2024-12-09 05:36:54.252933] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:07.293 request: 00:46:07.293 { 00:46:07.293 "name": "key0", 00:46:07.293 "path": "/tmp/tmp.gvIUu5FAoY", 00:46:07.293 "method": "keyring_file_add_key", 00:46:07.293 "req_id": 1 00:46:07.293 } 00:46:07.293 Got JSON-RPC error response 00:46:07.293 response: 00:46:07.293 { 00:46:07.293 "code": -1, 00:46:07.293 "message": "Operation not permitted" 00:46:07.293 } 00:46:07.552 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:46:07.811 [2024-12-09 05:36:54.525075] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:07.811 [2024-12-09 05:36:54.525130] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:46:07.811 request: 00:46:07.811 { 00:46:07.811 "name": "TLSTEST", 00:46:07.811 "trtype": "tcp", 00:46:07.811 "traddr": "10.0.0.3", 00:46:07.811 "adrfam": "ipv4", 00:46:07.811 "trsvcid": "4420", 00:46:07.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:07.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:07.811 "prchk_reftag": false, 00:46:07.811 "prchk_guard": false, 00:46:07.811 "hdgst": false, 00:46:07.811 "ddgst": false, 00:46:07.811 "psk": "key0", 00:46:07.811 "allow_unrecognized_csi": false, 00:46:07.811 "method": "bdev_nvme_attach_controller", 00:46:07.811 "req_id": 1 00:46:07.811 } 00:46:07.811 Got JSON-RPC error response 00:46:07.811 response: 00:46:07.811 { 00:46:07.811 "code": -126, 00:46:07.811 "message": "Required key not available" 00:46:07.811 } 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71685 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71685 ']' 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71685 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71685 00:46:07.811 killing process with pid 71685 00:46:07.811 Received shutdown signal, test time was about 10.000000 seconds 00:46:07.811 00:46:07.811 Latency(us) 00:46:07.811 [2024-12-09T05:36:54.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:07.811 [2024-12-09T05:36:54.783Z] =================================================================================================================== 00:46:07.811 [2024-12-09T05:36:54.783Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71685' 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71685 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71685 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71494 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71494 ']' 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71494 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71494 00:46:07.811 killing process with pid 71494 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71494' 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71494 00:46:07.811 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71494 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:46:08.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71724 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71724 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71724 ']' 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:08.070 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:08.070 [2024-12-09 05:36:54.992687] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:08.070 [2024-12-09 05:36:54.993381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:08.333 [2024-12-09 05:36:55.140370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:08.333 [2024-12-09 05:36:55.167592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:08.333 [2024-12-09 05:36:55.167858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:08.333 [2024-12-09 05:36:55.168250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:08.333 [2024-12-09 05:36:55.168576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:08.333 [2024-12-09 05:36:55.168681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
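For reference, the setup_nvmf_tgt sequence this file keeps re-running amounts to the following RPCs on the target's default socket (paths relative to the SPDK repo root; the subsystem, namespace and 10.0.0.3:4420 TLS listener values are the ones used in this run). The block that follows repeats it with the key file deliberately left at mode 0666, so the keyring_file_add_key step, and therefore nvmf_subsystem_add_host, is expected to fail:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0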
00:46:08.333 [2024-12-09 05:36:55.169133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:08.333 [2024-12-09 05:36:55.196511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.gvIUu5FAoY 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gvIUu5FAoY 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.gvIUu5FAoY 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gvIUu5FAoY 00:46:08.333 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:08.595 [2024-12-09 05:36:55.483358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:08.595 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:08.852 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:46:09.109 [2024-12-09 05:36:55.967421] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:09.109 [2024-12-09 05:36:55.967913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:09.110 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:09.367 malloc0 00:46:09.367 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:09.625 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY 00:46:09.883 
[2024-12-09 05:36:56.672532] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gvIUu5FAoY': 0100666 00:46:09.883 [2024-12-09 05:36:56.672576] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:09.883 request: 00:46:09.883 { 00:46:09.883 "name": "key0", 00:46:09.883 "path": "/tmp/tmp.gvIUu5FAoY", 00:46:09.883 "method": "keyring_file_add_key", 00:46:09.883 "req_id": 1 00:46:09.883 } 00:46:09.883 Got JSON-RPC error response 00:46:09.883 response: 00:46:09.883 { 00:46:09.883 "code": -1, 00:46:09.883 "message": "Operation not permitted" 00:46:09.883 } 00:46:09.883 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:46:10.142 [2024-12-09 05:36:56.900627] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:46:10.142 [2024-12-09 05:36:56.900677] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:46:10.142 request: 00:46:10.142 { 00:46:10.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:10.142 "host": "nqn.2016-06.io.spdk:host1", 00:46:10.142 "psk": "key0", 00:46:10.142 "method": "nvmf_subsystem_add_host", 00:46:10.142 "req_id": 1 00:46:10.142 } 00:46:10.142 Got JSON-RPC error response 00:46:10.142 response: 00:46:10.142 { 00:46:10.142 "code": -32603, 00:46:10.142 "message": "Internal error" 00:46:10.142 } 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71724 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71724 ']' 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71724 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71724 00:46:10.142 killing process with pid 71724 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71724' 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71724 00:46:10.142 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71724 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.gvIUu5FAoY 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71780 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71780 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71780 ']' 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:10.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:10.400 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:10.400 [2024-12-09 05:36:57.178196] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:10.401 [2024-12-09 05:36:57.178319] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:10.401 [2024-12-09 05:36:57.324853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:10.401 [2024-12-09 05:36:57.351490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:10.401 [2024-12-09 05:36:57.351543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:10.401 [2024-12-09 05:36:57.351553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:10.401 [2024-12-09 05:36:57.351570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:10.401 [2024-12-09 05:36:57.351594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
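Taken together, the failures above pin down what keyring_file_add_key requires of a PSK file: an absolute path (the empty-path attempt earlier was rejected with "Non-absolute paths are not allowed") and permissions that keep the file private to its owner (the world-readable mode 0100666 is rejected, 0600 is accepted). The passing runs from here on simply do:

    chmod 0600 /tmp/tmp.gvIUu5FAoY
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY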
00:46:10.401 [2024-12-09 05:36:57.351869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:10.659 [2024-12-09 05:36:57.379332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:11.228 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:11.228 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:11.228 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:11.228 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:11.228 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:11.228 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:11.228 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.gvIUu5FAoY 00:46:11.228 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gvIUu5FAoY 00:46:11.228 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:11.486 [2024-12-09 05:36:58.385349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:11.486 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:11.745 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:46:12.003 [2024-12-09 05:36:58.813427] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:12.003 [2024-12-09 05:36:58.813638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:12.003 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:12.262 malloc0 00:46:12.262 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:12.522 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY 00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:46:12.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
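The block below is the initiator half of the passing TLS test: bdevperf is started in wait mode (-z) on its own RPC socket, the PSK file is registered there under the name key0, a controller is attached over TLS with --psk key0, and the I/O phase is driven through bdevperf.py. Condensed (paths relative to the SPDK repo root; the harness backgrounds bdevperf and waits for the socket before issuing the RPCs):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests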
00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71830 00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71830 /var/tmp/bdevperf.sock 00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71830 ']' 00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:12.781 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:13.040 [2024-12-09 05:36:59.780486] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:13.040 [2024-12-09 05:36:59.780593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71830 ] 00:46:13.040 [2024-12-09 05:36:59.933136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:13.040 [2024-12-09 05:36:59.971712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:13.040 [2024-12-09 05:37:00.004219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:13.977 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:13.977 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:13.977 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY 00:46:13.977 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:46:14.236 [2024-12-09 05:37:01.111210] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:14.236 TLSTESTn1 00:46:14.236 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:46:14.804 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:46:14.804 "subsystems": [ 00:46:14.804 { 00:46:14.804 "subsystem": "keyring", 00:46:14.804 "config": [ 00:46:14.804 { 00:46:14.804 "method": "keyring_file_add_key", 00:46:14.804 "params": { 00:46:14.804 "name": "key0", 00:46:14.804 "path": "/tmp/tmp.gvIUu5FAoY" 00:46:14.804 } 00:46:14.804 } 00:46:14.804 ] 00:46:14.804 }, 
00:46:14.804 { 00:46:14.804 "subsystem": "iobuf", 00:46:14.804 "config": [ 00:46:14.804 { 00:46:14.804 "method": "iobuf_set_options", 00:46:14.804 "params": { 00:46:14.804 "small_pool_count": 8192, 00:46:14.804 "large_pool_count": 1024, 00:46:14.804 "small_bufsize": 8192, 00:46:14.804 "large_bufsize": 135168, 00:46:14.804 "enable_numa": false 00:46:14.804 } 00:46:14.804 } 00:46:14.804 ] 00:46:14.804 }, 00:46:14.804 { 00:46:14.804 "subsystem": "sock", 00:46:14.804 "config": [ 00:46:14.804 { 00:46:14.804 "method": "sock_set_default_impl", 00:46:14.804 "params": { 00:46:14.804 "impl_name": "uring" 00:46:14.804 } 00:46:14.804 }, 00:46:14.804 { 00:46:14.804 "method": "sock_impl_set_options", 00:46:14.804 "params": { 00:46:14.804 "impl_name": "ssl", 00:46:14.804 "recv_buf_size": 4096, 00:46:14.804 "send_buf_size": 4096, 00:46:14.804 "enable_recv_pipe": true, 00:46:14.804 "enable_quickack": false, 00:46:14.804 "enable_placement_id": 0, 00:46:14.804 "enable_zerocopy_send_server": true, 00:46:14.804 "enable_zerocopy_send_client": false, 00:46:14.804 "zerocopy_threshold": 0, 00:46:14.804 "tls_version": 0, 00:46:14.804 "enable_ktls": false 00:46:14.804 } 00:46:14.804 }, 00:46:14.804 { 00:46:14.804 "method": "sock_impl_set_options", 00:46:14.804 "params": { 00:46:14.805 "impl_name": "posix", 00:46:14.805 "recv_buf_size": 2097152, 00:46:14.805 "send_buf_size": 2097152, 00:46:14.805 "enable_recv_pipe": true, 00:46:14.805 "enable_quickack": false, 00:46:14.805 "enable_placement_id": 0, 00:46:14.805 "enable_zerocopy_send_server": true, 00:46:14.805 "enable_zerocopy_send_client": false, 00:46:14.805 "zerocopy_threshold": 0, 00:46:14.805 "tls_version": 0, 00:46:14.805 "enable_ktls": false 00:46:14.805 } 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "method": "sock_impl_set_options", 00:46:14.805 "params": { 00:46:14.805 "impl_name": "uring", 00:46:14.805 "recv_buf_size": 2097152, 00:46:14.805 "send_buf_size": 2097152, 00:46:14.805 "enable_recv_pipe": true, 00:46:14.805 "enable_quickack": false, 00:46:14.805 "enable_placement_id": 0, 00:46:14.805 "enable_zerocopy_send_server": false, 00:46:14.805 "enable_zerocopy_send_client": false, 00:46:14.805 "zerocopy_threshold": 0, 00:46:14.805 "tls_version": 0, 00:46:14.805 "enable_ktls": false 00:46:14.805 } 00:46:14.805 } 00:46:14.805 ] 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "subsystem": "vmd", 00:46:14.805 "config": [] 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "subsystem": "accel", 00:46:14.805 "config": [ 00:46:14.805 { 00:46:14.805 "method": "accel_set_options", 00:46:14.805 "params": { 00:46:14.805 "small_cache_size": 128, 00:46:14.805 "large_cache_size": 16, 00:46:14.805 "task_count": 2048, 00:46:14.805 "sequence_count": 2048, 00:46:14.805 "buf_count": 2048 00:46:14.805 } 00:46:14.805 } 00:46:14.805 ] 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "subsystem": "bdev", 00:46:14.805 "config": [ 00:46:14.805 { 00:46:14.805 "method": "bdev_set_options", 00:46:14.805 "params": { 00:46:14.805 "bdev_io_pool_size": 65535, 00:46:14.805 "bdev_io_cache_size": 256, 00:46:14.805 "bdev_auto_examine": true, 00:46:14.805 "iobuf_small_cache_size": 128, 00:46:14.805 "iobuf_large_cache_size": 16 00:46:14.805 } 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "method": "bdev_raid_set_options", 00:46:14.805 "params": { 00:46:14.805 "process_window_size_kb": 1024, 00:46:14.805 "process_max_bandwidth_mb_sec": 0 00:46:14.805 } 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "method": "bdev_iscsi_set_options", 00:46:14.805 "params": { 00:46:14.805 "timeout_sec": 30 00:46:14.805 } 00:46:14.805 
}, 00:46:14.805 { 00:46:14.805 "method": "bdev_nvme_set_options", 00:46:14.805 "params": { 00:46:14.805 "action_on_timeout": "none", 00:46:14.805 "timeout_us": 0, 00:46:14.805 "timeout_admin_us": 0, 00:46:14.805 "keep_alive_timeout_ms": 10000, 00:46:14.805 "arbitration_burst": 0, 00:46:14.805 "low_priority_weight": 0, 00:46:14.805 "medium_priority_weight": 0, 00:46:14.805 "high_priority_weight": 0, 00:46:14.805 "nvme_adminq_poll_period_us": 10000, 00:46:14.805 "nvme_ioq_poll_period_us": 0, 00:46:14.805 "io_queue_requests": 0, 00:46:14.805 "delay_cmd_submit": true, 00:46:14.805 "transport_retry_count": 4, 00:46:14.805 "bdev_retry_count": 3, 00:46:14.805 "transport_ack_timeout": 0, 00:46:14.805 "ctrlr_loss_timeout_sec": 0, 00:46:14.805 "reconnect_delay_sec": 0, 00:46:14.805 "fast_io_fail_timeout_sec": 0, 00:46:14.805 "disable_auto_failback": false, 00:46:14.805 "generate_uuids": false, 00:46:14.805 "transport_tos": 0, 00:46:14.805 "nvme_error_stat": false, 00:46:14.805 "rdma_srq_size": 0, 00:46:14.805 "io_path_stat": false, 00:46:14.805 "allow_accel_sequence": false, 00:46:14.805 "rdma_max_cq_size": 0, 00:46:14.805 "rdma_cm_event_timeout_ms": 0, 00:46:14.805 "dhchap_digests": [ 00:46:14.805 "sha256", 00:46:14.805 "sha384", 00:46:14.805 "sha512" 00:46:14.805 ], 00:46:14.805 "dhchap_dhgroups": [ 00:46:14.805 "null", 00:46:14.805 "ffdhe2048", 00:46:14.805 "ffdhe3072", 00:46:14.805 "ffdhe4096", 00:46:14.805 "ffdhe6144", 00:46:14.805 "ffdhe8192" 00:46:14.805 ] 00:46:14.805 } 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "method": "bdev_nvme_set_hotplug", 00:46:14.805 "params": { 00:46:14.805 "period_us": 100000, 00:46:14.805 "enable": false 00:46:14.805 } 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "method": "bdev_malloc_create", 00:46:14.805 "params": { 00:46:14.805 "name": "malloc0", 00:46:14.805 "num_blocks": 8192, 00:46:14.805 "block_size": 4096, 00:46:14.805 "physical_block_size": 4096, 00:46:14.805 "uuid": "a8749477-f948-4721-b198-a69276a402b2", 00:46:14.805 "optimal_io_boundary": 0, 00:46:14.805 "md_size": 0, 00:46:14.805 "dif_type": 0, 00:46:14.805 "dif_is_head_of_md": false, 00:46:14.805 "dif_pi_format": 0 00:46:14.805 } 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "method": "bdev_wait_for_examine" 00:46:14.805 } 00:46:14.805 ] 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "subsystem": "nbd", 00:46:14.805 "config": [] 00:46:14.805 }, 00:46:14.805 { 00:46:14.805 "subsystem": "scheduler", 00:46:14.805 "config": [ 00:46:14.805 { 00:46:14.805 "method": "framework_set_scheduler", 00:46:14.805 "params": { 00:46:14.806 "name": "static" 00:46:14.806 } 00:46:14.806 } 00:46:14.806 ] 00:46:14.806 }, 00:46:14.806 { 00:46:14.806 "subsystem": "nvmf", 00:46:14.806 "config": [ 00:46:14.806 { 00:46:14.806 "method": "nvmf_set_config", 00:46:14.806 "params": { 00:46:14.806 "discovery_filter": "match_any", 00:46:14.806 "admin_cmd_passthru": { 00:46:14.806 "identify_ctrlr": false 00:46:14.806 }, 00:46:14.806 "dhchap_digests": [ 00:46:14.806 "sha256", 00:46:14.806 "sha384", 00:46:14.806 "sha512" 00:46:14.806 ], 00:46:14.806 "dhchap_dhgroups": [ 00:46:14.806 "null", 00:46:14.806 "ffdhe2048", 00:46:14.806 "ffdhe3072", 00:46:14.806 "ffdhe4096", 00:46:14.806 "ffdhe6144", 00:46:14.806 "ffdhe8192" 00:46:14.806 ] 00:46:14.806 } 00:46:14.806 }, 00:46:14.806 { 00:46:14.806 "method": "nvmf_set_max_subsystems", 00:46:14.806 "params": { 00:46:14.806 "max_subsystems": 1024 00:46:14.806 } 00:46:14.806 }, 00:46:14.806 { 00:46:14.806 "method": "nvmf_set_crdt", 00:46:14.806 "params": { 00:46:14.806 "crdt1": 0, 00:46:14.806 
"crdt2": 0, 00:46:14.806 "crdt3": 0 00:46:14.806 } 00:46:14.806 }, 00:46:14.806 { 00:46:14.806 "method": "nvmf_create_transport", 00:46:14.806 "params": { 00:46:14.806 "trtype": "TCP", 00:46:14.806 "max_queue_depth": 128, 00:46:14.806 "max_io_qpairs_per_ctrlr": 127, 00:46:14.806 "in_capsule_data_size": 4096, 00:46:14.806 "max_io_size": 131072, 00:46:14.806 "io_unit_size": 131072, 00:46:14.806 "max_aq_depth": 128, 00:46:14.806 "num_shared_buffers": 511, 00:46:14.806 "buf_cache_size": 4294967295, 00:46:14.806 "dif_insert_or_strip": false, 00:46:14.806 "zcopy": false, 00:46:14.806 "c2h_success": false, 00:46:14.806 "sock_priority": 0, 00:46:14.806 "abort_timeout_sec": 1, 00:46:14.806 "ack_timeout": 0, 00:46:14.806 "data_wr_pool_size": 0 00:46:14.806 } 00:46:14.806 }, 00:46:14.806 { 00:46:14.806 "method": "nvmf_create_subsystem", 00:46:14.806 "params": { 00:46:14.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:14.806 "allow_any_host": false, 00:46:14.806 "serial_number": "SPDK00000000000001", 00:46:14.806 "model_number": "SPDK bdev Controller", 00:46:14.806 "max_namespaces": 10, 00:46:14.806 "min_cntlid": 1, 00:46:14.806 "max_cntlid": 65519, 00:46:14.806 "ana_reporting": false 00:46:14.806 } 00:46:14.806 }, 00:46:14.806 { 00:46:14.806 "method": "nvmf_subsystem_add_host", 00:46:14.806 "params": { 00:46:14.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:14.806 "host": "nqn.2016-06.io.spdk:host1", 00:46:14.806 "psk": "key0" 00:46:14.806 } 00:46:14.806 }, 00:46:14.806 { 00:46:14.806 "method": "nvmf_subsystem_add_ns", 00:46:14.806 "params": { 00:46:14.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:14.806 "namespace": { 00:46:14.806 "nsid": 1, 00:46:14.806 "bdev_name": "malloc0", 00:46:14.806 "nguid": "A8749477F9484721B198A69276A402B2", 00:46:14.806 "uuid": "a8749477-f948-4721-b198-a69276a402b2", 00:46:14.806 "no_auto_visible": false 00:46:14.806 } 00:46:14.806 } 00:46:14.806 }, 00:46:14.806 { 00:46:14.806 "method": "nvmf_subsystem_add_listener", 00:46:14.806 "params": { 00:46:14.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:14.806 "listen_address": { 00:46:14.806 "trtype": "TCP", 00:46:14.806 "adrfam": "IPv4", 00:46:14.806 "traddr": "10.0.0.3", 00:46:14.806 "trsvcid": "4420" 00:46:14.806 }, 00:46:14.806 "secure_channel": true 00:46:14.806 } 00:46:14.806 } 00:46:14.806 ] 00:46:14.806 } 00:46:14.806 ] 00:46:14.806 }' 00:46:14.806 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:46:15.067 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:46:15.067 "subsystems": [ 00:46:15.067 { 00:46:15.067 "subsystem": "keyring", 00:46:15.067 "config": [ 00:46:15.067 { 00:46:15.067 "method": "keyring_file_add_key", 00:46:15.067 "params": { 00:46:15.067 "name": "key0", 00:46:15.067 "path": "/tmp/tmp.gvIUu5FAoY" 00:46:15.067 } 00:46:15.067 } 00:46:15.067 ] 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "subsystem": "iobuf", 00:46:15.067 "config": [ 00:46:15.067 { 00:46:15.067 "method": "iobuf_set_options", 00:46:15.067 "params": { 00:46:15.067 "small_pool_count": 8192, 00:46:15.067 "large_pool_count": 1024, 00:46:15.067 "small_bufsize": 8192, 00:46:15.067 "large_bufsize": 135168, 00:46:15.067 "enable_numa": false 00:46:15.067 } 00:46:15.067 } 00:46:15.067 ] 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "subsystem": "sock", 00:46:15.067 "config": [ 00:46:15.067 { 00:46:15.067 "method": "sock_set_default_impl", 00:46:15.067 "params": { 00:46:15.067 "impl_name": "uring" 00:46:15.067 
} 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "method": "sock_impl_set_options", 00:46:15.067 "params": { 00:46:15.067 "impl_name": "ssl", 00:46:15.067 "recv_buf_size": 4096, 00:46:15.067 "send_buf_size": 4096, 00:46:15.067 "enable_recv_pipe": true, 00:46:15.067 "enable_quickack": false, 00:46:15.067 "enable_placement_id": 0, 00:46:15.067 "enable_zerocopy_send_server": true, 00:46:15.067 "enable_zerocopy_send_client": false, 00:46:15.067 "zerocopy_threshold": 0, 00:46:15.067 "tls_version": 0, 00:46:15.067 "enable_ktls": false 00:46:15.067 } 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "method": "sock_impl_set_options", 00:46:15.067 "params": { 00:46:15.067 "impl_name": "posix", 00:46:15.067 "recv_buf_size": 2097152, 00:46:15.067 "send_buf_size": 2097152, 00:46:15.067 "enable_recv_pipe": true, 00:46:15.067 "enable_quickack": false, 00:46:15.067 "enable_placement_id": 0, 00:46:15.067 "enable_zerocopy_send_server": true, 00:46:15.067 "enable_zerocopy_send_client": false, 00:46:15.067 "zerocopy_threshold": 0, 00:46:15.067 "tls_version": 0, 00:46:15.067 "enable_ktls": false 00:46:15.067 } 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "method": "sock_impl_set_options", 00:46:15.067 "params": { 00:46:15.067 "impl_name": "uring", 00:46:15.067 "recv_buf_size": 2097152, 00:46:15.067 "send_buf_size": 2097152, 00:46:15.067 "enable_recv_pipe": true, 00:46:15.067 "enable_quickack": false, 00:46:15.067 "enable_placement_id": 0, 00:46:15.067 "enable_zerocopy_send_server": false, 00:46:15.067 "enable_zerocopy_send_client": false, 00:46:15.067 "zerocopy_threshold": 0, 00:46:15.067 "tls_version": 0, 00:46:15.067 "enable_ktls": false 00:46:15.067 } 00:46:15.067 } 00:46:15.067 ] 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "subsystem": "vmd", 00:46:15.067 "config": [] 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "subsystem": "accel", 00:46:15.067 "config": [ 00:46:15.067 { 00:46:15.067 "method": "accel_set_options", 00:46:15.067 "params": { 00:46:15.067 "small_cache_size": 128, 00:46:15.067 "large_cache_size": 16, 00:46:15.067 "task_count": 2048, 00:46:15.067 "sequence_count": 2048, 00:46:15.067 "buf_count": 2048 00:46:15.067 } 00:46:15.067 } 00:46:15.067 ] 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "subsystem": "bdev", 00:46:15.067 "config": [ 00:46:15.067 { 00:46:15.067 "method": "bdev_set_options", 00:46:15.067 "params": { 00:46:15.067 "bdev_io_pool_size": 65535, 00:46:15.067 "bdev_io_cache_size": 256, 00:46:15.067 "bdev_auto_examine": true, 00:46:15.067 "iobuf_small_cache_size": 128, 00:46:15.067 "iobuf_large_cache_size": 16 00:46:15.067 } 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "method": "bdev_raid_set_options", 00:46:15.067 "params": { 00:46:15.067 "process_window_size_kb": 1024, 00:46:15.067 "process_max_bandwidth_mb_sec": 0 00:46:15.067 } 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "method": "bdev_iscsi_set_options", 00:46:15.067 "params": { 00:46:15.067 "timeout_sec": 30 00:46:15.067 } 00:46:15.067 }, 00:46:15.067 { 00:46:15.067 "method": "bdev_nvme_set_options", 00:46:15.068 "params": { 00:46:15.068 "action_on_timeout": "none", 00:46:15.068 "timeout_us": 0, 00:46:15.068 "timeout_admin_us": 0, 00:46:15.068 "keep_alive_timeout_ms": 10000, 00:46:15.068 "arbitration_burst": 0, 00:46:15.068 "low_priority_weight": 0, 00:46:15.068 "medium_priority_weight": 0, 00:46:15.068 "high_priority_weight": 0, 00:46:15.068 "nvme_adminq_poll_period_us": 10000, 00:46:15.068 "nvme_ioq_poll_period_us": 0, 00:46:15.068 "io_queue_requests": 512, 00:46:15.068 "delay_cmd_submit": true, 00:46:15.068 "transport_retry_count": 4, 
00:46:15.068 "bdev_retry_count": 3, 00:46:15.068 "transport_ack_timeout": 0, 00:46:15.068 "ctrlr_loss_timeout_sec": 0, 00:46:15.068 "reconnect_delay_sec": 0, 00:46:15.068 "fast_io_fail_timeout_sec": 0, 00:46:15.068 "disable_auto_failback": false, 00:46:15.068 "generate_uuids": false, 00:46:15.068 "transport_tos": 0, 00:46:15.068 "nvme_error_stat": false, 00:46:15.068 "rdma_srq_size": 0, 00:46:15.068 "io_path_stat": false, 00:46:15.068 "allow_accel_sequence": false, 00:46:15.068 "rdma_max_cq_size": 0, 00:46:15.068 "rdma_cm_event_timeout_ms": 0, 00:46:15.068 "dhchap_digests": [ 00:46:15.068 "sha256", 00:46:15.068 "sha384", 00:46:15.068 "sha512" 00:46:15.068 ], 00:46:15.068 "dhchap_dhgroups": [ 00:46:15.068 "null", 00:46:15.068 "ffdhe2048", 00:46:15.068 "ffdhe3072", 00:46:15.068 "ffdhe4096", 00:46:15.068 "ffdhe6144", 00:46:15.068 "ffdhe8192" 00:46:15.068 ] 00:46:15.068 } 00:46:15.068 }, 00:46:15.068 { 00:46:15.068 "method": "bdev_nvme_attach_controller", 00:46:15.068 "params": { 00:46:15.068 "name": "TLSTEST", 00:46:15.068 "trtype": "TCP", 00:46:15.068 "adrfam": "IPv4", 00:46:15.068 "traddr": "10.0.0.3", 00:46:15.068 "trsvcid": "4420", 00:46:15.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:15.068 "prchk_reftag": false, 00:46:15.068 "prchk_guard": false, 00:46:15.068 "ctrlr_loss_timeout_sec": 0, 00:46:15.068 "reconnect_delay_sec": 0, 00:46:15.068 "fast_io_fail_timeout_sec": 0, 00:46:15.068 "psk": "key0", 00:46:15.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:15.068 "hdgst": false, 00:46:15.068 "ddgst": false, 00:46:15.068 "multipath": "multipath" 00:46:15.068 } 00:46:15.068 }, 00:46:15.068 { 00:46:15.068 "method": "bdev_nvme_set_hotplug", 00:46:15.068 "params": { 00:46:15.068 "period_us": 100000, 00:46:15.068 "enable": false 00:46:15.068 } 00:46:15.068 }, 00:46:15.068 { 00:46:15.068 "method": "bdev_wait_for_examine" 00:46:15.068 } 00:46:15.068 ] 00:46:15.068 }, 00:46:15.068 { 00:46:15.068 "subsystem": "nbd", 00:46:15.068 "config": [] 00:46:15.068 } 00:46:15.068 ] 00:46:15.068 }' 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71830 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71830 ']' 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71830 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71830 00:46:15.068 killing process with pid 71830 00:46:15.068 Received shutdown signal, test time was about 10.000000 seconds 00:46:15.068 00:46:15.068 Latency(us) 00:46:15.068 [2024-12-09T05:37:02.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:15.068 [2024-12-09T05:37:02.040Z] =================================================================================================================== 00:46:15.068 [2024-12-09T05:37:02.040Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 71830' 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71830 00:46:15.068 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71830 00:46:15.328 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71780 00:46:15.328 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71780 ']' 00:46:15.328 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71780 00:46:15.328 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:15.328 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:15.328 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71780 00:46:15.329 killing process with pid 71780 00:46:15.329 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:15.329 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:15.329 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71780' 00:46:15.329 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71780 00:46:15.329 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71780 00:46:15.329 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:46:15.329 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:15.329 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:15.329 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:15.329 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:46:15.329 "subsystems": [ 00:46:15.329 { 00:46:15.329 "subsystem": "keyring", 00:46:15.329 "config": [ 00:46:15.329 { 00:46:15.329 "method": "keyring_file_add_key", 00:46:15.329 "params": { 00:46:15.329 "name": "key0", 00:46:15.329 "path": "/tmp/tmp.gvIUu5FAoY" 00:46:15.329 } 00:46:15.329 } 00:46:15.329 ] 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "subsystem": "iobuf", 00:46:15.329 "config": [ 00:46:15.329 { 00:46:15.329 "method": "iobuf_set_options", 00:46:15.329 "params": { 00:46:15.329 "small_pool_count": 8192, 00:46:15.329 "large_pool_count": 1024, 00:46:15.329 "small_bufsize": 8192, 00:46:15.329 "large_bufsize": 135168, 00:46:15.329 "enable_numa": false 00:46:15.329 } 00:46:15.329 } 00:46:15.329 ] 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "subsystem": "sock", 00:46:15.329 "config": [ 00:46:15.329 { 00:46:15.329 "method": "sock_set_default_impl", 00:46:15.329 "params": { 00:46:15.329 "impl_name": "uring" 00:46:15.329 } 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "method": "sock_impl_set_options", 00:46:15.329 "params": { 00:46:15.329 "impl_name": "ssl", 00:46:15.329 "recv_buf_size": 4096, 00:46:15.329 "send_buf_size": 4096, 00:46:15.329 "enable_recv_pipe": true, 00:46:15.329 "enable_quickack": false, 00:46:15.329 "enable_placement_id": 0, 00:46:15.329 "enable_zerocopy_send_server": true, 00:46:15.329 "enable_zerocopy_send_client": false, 00:46:15.329 "zerocopy_threshold": 0, 00:46:15.329 "tls_version": 0, 00:46:15.329 
"enable_ktls": false 00:46:15.329 } 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "method": "sock_impl_set_options", 00:46:15.329 "params": { 00:46:15.329 "impl_name": "posix", 00:46:15.329 "recv_buf_size": 2097152, 00:46:15.329 "send_buf_size": 2097152, 00:46:15.329 "enable_recv_pipe": true, 00:46:15.329 "enable_quickack": false, 00:46:15.329 "enable_placement_id": 0, 00:46:15.329 "enable_zerocopy_send_server": true, 00:46:15.329 "enable_zerocopy_send_client": false, 00:46:15.329 "zerocopy_threshold": 0, 00:46:15.329 "tls_version": 0, 00:46:15.329 "enable_ktls": false 00:46:15.329 } 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "method": "sock_impl_set_options", 00:46:15.329 "params": { 00:46:15.329 "impl_name": "uring", 00:46:15.329 "recv_buf_size": 2097152, 00:46:15.329 "send_buf_size": 2097152, 00:46:15.329 "enable_recv_pipe": true, 00:46:15.329 "enable_quickack": false, 00:46:15.329 "enable_placement_id": 0, 00:46:15.329 "enable_zerocopy_send_server": false, 00:46:15.329 "enable_zerocopy_send_client": false, 00:46:15.329 "zerocopy_threshold": 0, 00:46:15.329 "tls_version": 0, 00:46:15.329 "enable_ktls": false 00:46:15.329 } 00:46:15.329 } 00:46:15.329 ] 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "subsystem": "vmd", 00:46:15.329 "config": [] 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "subsystem": "accel", 00:46:15.329 "config": [ 00:46:15.329 { 00:46:15.329 "method": "accel_set_options", 00:46:15.329 "params": { 00:46:15.329 "small_cache_size": 128, 00:46:15.329 "large_cache_size": 16, 00:46:15.329 "task_count": 2048, 00:46:15.329 "sequence_count": 2048, 00:46:15.329 "buf_count": 2048 00:46:15.329 } 00:46:15.329 } 00:46:15.329 ] 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "subsystem": "bdev", 00:46:15.329 "config": [ 00:46:15.329 { 00:46:15.329 "method": "bdev_set_options", 00:46:15.329 "params": { 00:46:15.329 "bdev_io_pool_size": 65535, 00:46:15.329 "bdev_io_cache_size": 256, 00:46:15.329 "bdev_auto_examine": true, 00:46:15.329 "iobuf_small_cache_size": 128, 00:46:15.329 "iobuf_large_cache_size": 16 00:46:15.329 } 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "method": "bdev_raid_set_options", 00:46:15.329 "params": { 00:46:15.329 "process_window_size_kb": 1024, 00:46:15.329 "process_max_bandwidth_mb_sec": 0 00:46:15.329 } 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "method": "bdev_iscsi_set_options", 00:46:15.329 "params": { 00:46:15.329 "timeout_sec": 30 00:46:15.329 } 00:46:15.329 }, 00:46:15.329 { 00:46:15.329 "method": "bdev_nvme_set_options", 00:46:15.329 "params": { 00:46:15.329 "action_on_timeout": "none", 00:46:15.329 "timeout_us": 0, 00:46:15.329 "timeout_admin_us": 0, 00:46:15.329 "keep_alive_timeout_ms": 10000, 00:46:15.329 "arbitration_burst": 0, 00:46:15.329 "low_priority_weight": 0, 00:46:15.329 "medium_priority_weight": 0, 00:46:15.329 "high_priority_weight": 0, 00:46:15.329 "nvme_adminq_poll_period_us": 10000, 00:46:15.329 "nvme_ioq_poll_period_us": 0, 00:46:15.329 "io_queue_requests": 0, 00:46:15.329 "delay_cmd_submit": true, 00:46:15.329 "transport_retry_count": 4, 00:46:15.329 "bdev_retry_count": 3, 00:46:15.329 "transport_ack_timeout": 0, 00:46:15.329 "ctrlr_loss_timeout_sec": 0, 00:46:15.330 "reconnect_delay_sec": 0, 00:46:15.330 "fast_io_fail_timeout_sec": 0, 00:46:15.330 "disable_auto_failback": false, 00:46:15.330 "generate_uuids": false, 00:46:15.330 "transport_tos": 0, 00:46:15.330 "nvme_error_stat": false, 00:46:15.330 "rdma_srq_size": 0, 00:46:15.330 "io_path_stat": false, 00:46:15.330 "allow_accel_sequence": false, 00:46:15.330 "rdma_max_cq_size": 0, 
00:46:15.330 "rdma_cm_event_timeout_ms": 0, 00:46:15.330 "dhchap_digests": [ 00:46:15.330 "sha256", 00:46:15.330 "sha384", 00:46:15.330 "sha512" 00:46:15.330 ], 00:46:15.330 "dhchap_dhgroups": [ 00:46:15.330 "null", 00:46:15.330 "ffdhe2048", 00:46:15.330 "ffdhe3072", 00:46:15.330 "ffdhe4096", 00:46:15.330 "ffdhe6144", 00:46:15.330 "ffdhe8192" 00:46:15.330 ] 00:46:15.330 } 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "method": "bdev_nvme_set_hotplug", 00:46:15.330 "params": { 00:46:15.330 "period_us": 100000, 00:46:15.330 "enable": false 00:46:15.330 } 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "method": "bdev_malloc_create", 00:46:15.330 "params": { 00:46:15.330 "name": "malloc0", 00:46:15.330 "num_blocks": 8192, 00:46:15.330 "block_size": 4096, 00:46:15.330 "physical_block_size": 4096, 00:46:15.330 "uuid": "a8749477-f948-4721-b198-a69276a402b2", 00:46:15.330 "optimal_io_boundary": 0, 00:46:15.330 "md_size": 0, 00:46:15.330 "dif_type": 0, 00:46:15.330 "dif_is_head_of_md": false, 00:46:15.330 "dif_pi_format": 0 00:46:15.330 } 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "method": "bdev_wait_for_examine" 00:46:15.330 } 00:46:15.330 ] 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "subsystem": "nbd", 00:46:15.330 "config": [] 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "subsystem": "scheduler", 00:46:15.330 "config": [ 00:46:15.330 { 00:46:15.330 "method": "framework_set_scheduler", 00:46:15.330 "params": { 00:46:15.330 "name": "static" 00:46:15.330 } 00:46:15.330 } 00:46:15.330 ] 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "subsystem": "nvmf", 00:46:15.330 "config": [ 00:46:15.330 { 00:46:15.330 "method": "nvmf_set_config", 00:46:15.330 "params": { 00:46:15.330 "discovery_filter": "match_any", 00:46:15.330 "admin_cmd_passthru": { 00:46:15.330 "identify_ctrlr": false 00:46:15.330 }, 00:46:15.330 "dhchap_digests": [ 00:46:15.330 "sha256", 00:46:15.330 "sha384", 00:46:15.330 "sha512" 00:46:15.330 ], 00:46:15.330 "dhchap_dhgroups": [ 00:46:15.330 "null", 00:46:15.330 "ffdhe2048", 00:46:15.330 "ffdhe3072", 00:46:15.330 "ffdhe4096", 00:46:15.330 "ffdhe6144", 00:46:15.330 "ffdhe8192" 00:46:15.330 ] 00:46:15.330 } 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "method": "nvmf_set_max_subsystems", 00:46:15.330 "params": { 00:46:15.330 "max_subsystems": 1024 00:46:15.330 } 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "method": "nvmf_set_crdt", 00:46:15.330 "params": { 00:46:15.330 "crdt1": 0, 00:46:15.330 "crdt2": 0, 00:46:15.330 "crdt3": 0 00:46:15.330 } 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "method": "nvmf_create_transport", 00:46:15.330 "params": { 00:46:15.330 "trtype": "TCP", 00:46:15.330 "max_queue_depth": 128, 00:46:15.330 "max_io_qpairs_per_ctrlr": 127, 00:46:15.330 "in_capsule_data_size": 4096, 00:46:15.330 "max_io_size": 131072, 00:46:15.330 "io_unit_size": 131072, 00:46:15.330 "max_aq_depth": 128, 00:46:15.330 "num_shared_buffers": 511, 00:46:15.330 "buf_cache_size": 4294967295, 00:46:15.330 "dif_insert_or_strip": false, 00:46:15.330 "zcopy": false, 00:46:15.330 "c2h_success": false, 00:46:15.330 "sock_priority": 0, 00:46:15.330 "abort_timeout_sec": 1, 00:46:15.330 "ack_timeout": 0, 00:46:15.330 "data_wr_pool_size": 0 00:46:15.330 } 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "method": "nvmf_create_subsystem", 00:46:15.330 "params": { 00:46:15.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:15.330 "allow_any_host": false, 00:46:15.330 "serial_number": "SPDK00000000000001", 00:46:15.330 "model_number": "SPDK bdev Controller", 00:46:15.330 "max_namespaces": 10, 00:46:15.330 "min_cntlid": 1, 
00:46:15.330 "max_cntlid": 65519, 00:46:15.330 "ana_reporting": false 00:46:15.330 } 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "method": "nvmf_subsystem_add_host", 00:46:15.330 "params": { 00:46:15.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:15.330 "host": "nqn.2016-06.io.spdk:host1", 00:46:15.330 "psk": "key0" 00:46:15.330 } 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "method": "nvmf_subsystem_add_ns", 00:46:15.330 "params": { 00:46:15.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:15.330 "namespace": { 00:46:15.330 "nsid": 1, 00:46:15.330 "bdev_name": "malloc0", 00:46:15.330 "nguid": "A8749477F9484721B198A69276A402B2", 00:46:15.330 "uuid": "a8749477-f948-4721-b198-a69276a402b2", 00:46:15.330 "no_auto_visible": false 00:46:15.330 } 00:46:15.330 } 00:46:15.330 }, 00:46:15.330 { 00:46:15.330 "method": "nvmf_subsystem_add_listener", 00:46:15.330 "params": { 00:46:15.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:15.331 "listen_address": { 00:46:15.331 "trtype": "TCP", 00:46:15.331 "adrfam": "IPv4", 00:46:15.331 "traddr": "10.0.0.3", 00:46:15.331 "trsvcid": "4420" 00:46:15.331 }, 00:46:15.331 "secure_channel": true 00:46:15.331 } 00:46:15.331 } 00:46:15.331 ] 00:46:15.331 } 00:46:15.331 ] 00:46:15.331 }' 00:46:15.331 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71880 00:46:15.331 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:46:15.331 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71880 00:46:15.331 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71880 ']' 00:46:15.331 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:15.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:15.331 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:15.331 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:15.331 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:15.331 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:15.588 [2024-12-09 05:37:02.298347] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:15.588 [2024-12-09 05:37:02.298474] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:15.588 [2024-12-09 05:37:02.450211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:15.588 [2024-12-09 05:37:02.477435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:15.588 [2024-12-09 05:37:02.477499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:15.588 [2024-12-09 05:37:02.477524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:15.588 [2024-12-09 05:37:02.477532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:46:15.588 [2024-12-09 05:37:02.477538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:15.588 [2024-12-09 05:37:02.477888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:15.847 [2024-12-09 05:37:02.617523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:15.847 [2024-12-09 05:37:02.672870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:15.847 [2024-12-09 05:37:02.704826] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:15.847 [2024-12-09 05:37:02.705018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:16.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71912 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71912 /var/tmp/bdevperf.sock 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71912 ']' 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
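The target restart traced above rebuilds the whole TLS configuration in one shot: the JSON echoed into the new process (apparently captured from the previous target with save_config) is read back through -c /dev/fd/62. A minimal sketch of that pattern, assuming the same netns, core mask and descriptor used by the test; the here-string redirection is one plausible way to feed the descriptor, the harness wires it up inside nvmfappstart:

  # capture the running target's config, then start a fresh target from it
  CONFIG=$(scripts/rpc.py save_config)
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 62<<< "$CONFIG"

Passing the config at startup avoids re-issuing every RPC by hand and carries over the keyring entry, the TLS listener and the secure_channel flag exactly as they were on the previous instance.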
00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:46:16.415 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:46:16.415 "subsystems": [ 00:46:16.415 { 00:46:16.415 "subsystem": "keyring", 00:46:16.415 "config": [ 00:46:16.415 { 00:46:16.415 "method": "keyring_file_add_key", 00:46:16.415 "params": { 00:46:16.415 "name": "key0", 00:46:16.415 "path": "/tmp/tmp.gvIUu5FAoY" 00:46:16.415 } 00:46:16.415 } 00:46:16.415 ] 00:46:16.415 }, 00:46:16.415 { 00:46:16.415 "subsystem": "iobuf", 00:46:16.415 "config": [ 00:46:16.415 { 00:46:16.415 "method": "iobuf_set_options", 00:46:16.415 "params": { 00:46:16.415 "small_pool_count": 8192, 00:46:16.415 "large_pool_count": 1024, 00:46:16.415 "small_bufsize": 8192, 00:46:16.415 "large_bufsize": 135168, 00:46:16.415 "enable_numa": false 00:46:16.415 } 00:46:16.415 } 00:46:16.415 ] 00:46:16.415 }, 00:46:16.415 { 00:46:16.415 "subsystem": "sock", 00:46:16.415 "config": [ 00:46:16.415 { 00:46:16.415 "method": "sock_set_default_impl", 00:46:16.415 "params": { 00:46:16.415 "impl_name": "uring" 00:46:16.415 } 00:46:16.415 }, 00:46:16.415 { 00:46:16.415 "method": "sock_impl_set_options", 00:46:16.415 "params": { 00:46:16.415 "impl_name": "ssl", 00:46:16.415 "recv_buf_size": 4096, 00:46:16.415 "send_buf_size": 4096, 00:46:16.415 "enable_recv_pipe": true, 00:46:16.415 "enable_quickack": false, 00:46:16.415 "enable_placement_id": 0, 00:46:16.415 "enable_zerocopy_send_server": true, 00:46:16.415 "enable_zerocopy_send_client": false, 00:46:16.415 "zerocopy_threshold": 0, 00:46:16.415 "tls_version": 0, 00:46:16.415 "enable_ktls": false 00:46:16.415 } 00:46:16.415 }, 00:46:16.415 { 00:46:16.415 "method": "sock_impl_set_options", 00:46:16.415 "params": { 00:46:16.415 "impl_name": "posix", 00:46:16.415 "recv_buf_size": 2097152, 00:46:16.415 "send_buf_size": 2097152, 00:46:16.415 "enable_recv_pipe": true, 00:46:16.415 "enable_quickack": false, 00:46:16.415 "enable_placement_id": 0, 00:46:16.415 "enable_zerocopy_send_server": true, 00:46:16.415 "enable_zerocopy_send_client": false, 00:46:16.415 "zerocopy_threshold": 0, 00:46:16.415 "tls_version": 0, 00:46:16.415 "enable_ktls": false 00:46:16.415 } 00:46:16.415 }, 00:46:16.415 { 00:46:16.415 "method": "sock_impl_set_options", 00:46:16.415 "params": { 00:46:16.415 "impl_name": "uring", 00:46:16.415 "recv_buf_size": 2097152, 00:46:16.415 "send_buf_size": 2097152, 00:46:16.415 "enable_recv_pipe": true, 00:46:16.415 "enable_quickack": false, 00:46:16.415 "enable_placement_id": 0, 00:46:16.415 "enable_zerocopy_send_server": false, 00:46:16.415 "enable_zerocopy_send_client": false, 00:46:16.415 "zerocopy_threshold": 0, 00:46:16.415 "tls_version": 0, 00:46:16.415 "enable_ktls": false 00:46:16.415 } 00:46:16.415 } 00:46:16.415 ] 00:46:16.415 }, 00:46:16.415 { 00:46:16.415 "subsystem": "vmd", 00:46:16.416 "config": [] 00:46:16.416 }, 00:46:16.416 { 00:46:16.416 "subsystem": "accel", 00:46:16.416 "config": [ 00:46:16.416 { 00:46:16.416 "method": "accel_set_options", 00:46:16.416 "params": { 00:46:16.416 "small_cache_size": 128, 00:46:16.416 "large_cache_size": 16, 00:46:16.416 "task_count": 2048, 00:46:16.416 "sequence_count": 
2048, 00:46:16.416 "buf_count": 2048 00:46:16.416 } 00:46:16.416 } 00:46:16.416 ] 00:46:16.416 }, 00:46:16.416 { 00:46:16.416 "subsystem": "bdev", 00:46:16.416 "config": [ 00:46:16.416 { 00:46:16.416 "method": "bdev_set_options", 00:46:16.416 "params": { 00:46:16.416 "bdev_io_pool_size": 65535, 00:46:16.416 "bdev_io_cache_size": 256, 00:46:16.416 "bdev_auto_examine": true, 00:46:16.416 "iobuf_small_cache_size": 128, 00:46:16.416 "iobuf_large_cache_size": 16 00:46:16.416 } 00:46:16.416 }, 00:46:16.416 { 00:46:16.416 "method": "bdev_raid_set_options", 00:46:16.416 "params": { 00:46:16.416 "process_window_size_kb": 1024, 00:46:16.416 "process_max_bandwidth_mb_sec": 0 00:46:16.416 } 00:46:16.416 }, 00:46:16.416 { 00:46:16.416 "method": "bdev_iscsi_set_options", 00:46:16.416 "params": { 00:46:16.416 "timeout_sec": 30 00:46:16.416 } 00:46:16.416 }, 00:46:16.416 { 00:46:16.416 "method": "bdev_nvme_set_options", 00:46:16.416 "params": { 00:46:16.416 "action_on_timeout": "none", 00:46:16.416 "timeout_us": 0, 00:46:16.416 "timeout_admin_us": 0, 00:46:16.416 "keep_alive_timeout_ms": 10000, 00:46:16.416 "arbitration_burst": 0, 00:46:16.416 "low_priority_weight": 0, 00:46:16.416 "medium_priority_weight": 0, 00:46:16.416 "high_priority_weight": 0, 00:46:16.416 "nvme_adminq_poll_period_us": 10000, 00:46:16.416 "nvme_ioq_poll_period_us": 0, 00:46:16.416 "io_queue_requests": 512, 00:46:16.416 "delay_cmd_submit": true, 00:46:16.416 "transport_retry_count": 4, 00:46:16.416 "bdev_retry_count": 3, 00:46:16.416 "transport_ack_timeout": 0, 00:46:16.416 "ctrlr_loss_timeout_sec": 0, 00:46:16.416 "reconnect_delay_sec": 0, 00:46:16.416 "fast_io_fail_timeout_sec": 0, 00:46:16.416 "disable_auto_failback": false, 00:46:16.416 "generate_uuids": false, 00:46:16.416 "transport_tos": 0, 00:46:16.416 "nvme_error_stat": false, 00:46:16.416 "rdma_srq_size": 0, 00:46:16.416 "io_path_stat": false, 00:46:16.416 "allow_accel_sequence": false, 00:46:16.416 "rdma_max_cq_size": 0, 00:46:16.416 "rdma_cm_event_timeout_ms": 0, 00:46:16.416 "dhchap_digests": [ 00:46:16.416 "sha256", 00:46:16.416 "sha384", 00:46:16.416 "sha512" 00:46:16.416 ], 00:46:16.416 "dhchap_dhgroups": [ 00:46:16.416 "null", 00:46:16.416 "ffdhe2048", 00:46:16.416 "ffdhe3072", 00:46:16.416 "ffdhe4096", 00:46:16.416 "ffdhe6144", 00:46:16.416 "ffdhe8192" 00:46:16.416 ] 00:46:16.416 } 00:46:16.416 }, 00:46:16.416 { 00:46:16.416 "method": "bdev_nvme_attach_controller", 00:46:16.416 "params": { 00:46:16.416 "name": "TLSTEST", 00:46:16.416 "trtype": "TCP", 00:46:16.416 "adrfam": "IPv4", 00:46:16.416 "traddr": "10.0.0.3", 00:46:16.416 "trsvcid": "4420", 00:46:16.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:16.416 "prchk_reftag": false, 00:46:16.416 "prchk_guard": false, 00:46:16.416 "ctrlr_loss_timeout_sec": 0, 00:46:16.416 "reconnect_delay_sec": 0, 00:46:16.416 "fast_io_fail_timeout_sec": 0, 00:46:16.416 "psk": "key0", 00:46:16.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:16.416 "hdgst": false, 00:46:16.416 "ddgst": false, 00:46:16.416 "multipath": "multipath" 00:46:16.416 } 00:46:16.416 }, 00:46:16.416 { 00:46:16.416 "method": "bdev_nvme_set_hotplug", 00:46:16.416 "params": { 00:46:16.416 "period_us": 100000, 00:46:16.416 "enable": false 00:46:16.416 } 00:46:16.416 }, 00:46:16.416 { 00:46:16.416 "method": "bdev_wait_for_examine" 00:46:16.416 } 00:46:16.416 ] 00:46:16.416 }, 00:46:16.416 { 00:46:16.416 "subsystem": "nbd", 00:46:16.416 "config": [] 00:46:16.416 } 00:46:16.416 ] 00:46:16.416 }' 00:46:16.416 [2024-12-09 05:37:03.300508] Starting SPDK v25.01-pre git 
sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:16.416 [2024-12-09 05:37:03.300642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71912 ] 00:46:16.675 [2024-12-09 05:37:03.458076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.675 [2024-12-09 05:37:03.497039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:16.675 [2024-12-09 05:37:03.609013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:16.675 [2024-12-09 05:37:03.640505] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:17.611 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:17.611 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:17.611 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:46:17.612 Running I/O for 10 seconds... 00:46:19.482 4736.00 IOPS, 18.50 MiB/s [2024-12-09T05:37:07.388Z] 4795.00 IOPS, 18.73 MiB/s [2024-12-09T05:37:08.322Z] 4814.00 IOPS, 18.80 MiB/s [2024-12-09T05:37:09.711Z] 4825.50 IOPS, 18.85 MiB/s [2024-12-09T05:37:10.322Z] 4835.20 IOPS, 18.89 MiB/s [2024-12-09T05:37:11.699Z] 4835.00 IOPS, 18.89 MiB/s [2024-12-09T05:37:12.636Z] 4841.14 IOPS, 18.91 MiB/s [2024-12-09T05:37:13.575Z] 4845.25 IOPS, 18.93 MiB/s [2024-12-09T05:37:14.513Z] 4850.67 IOPS, 18.95 MiB/s [2024-12-09T05:37:14.513Z] 4855.20 IOPS, 18.97 MiB/s 00:46:27.541 Latency(us) 00:46:27.542 [2024-12-09T05:37:14.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:27.542 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:46:27.542 Verification LBA range: start 0x0 length 0x2000 00:46:27.542 TLSTESTn1 : 10.01 4861.05 18.99 0.00 0.00 26287.20 4766.25 20971.52 00:46:27.542 [2024-12-09T05:37:14.514Z] =================================================================================================================== 00:46:27.542 [2024-12-09T05:37:14.514Z] Total : 4861.05 18.99 0.00 0.00 26287.20 4766.25 20971.52 00:46:27.542 { 00:46:27.542 "results": [ 00:46:27.542 { 00:46:27.542 "job": "TLSTESTn1", 00:46:27.542 "core_mask": "0x4", 00:46:27.542 "workload": "verify", 00:46:27.542 "status": "finished", 00:46:27.542 "verify_range": { 00:46:27.542 "start": 0, 00:46:27.542 "length": 8192 00:46:27.542 }, 00:46:27.542 "queue_depth": 128, 00:46:27.542 "io_size": 4096, 00:46:27.542 "runtime": 10.01367, 00:46:27.542 "iops": 4861.054937899891, 00:46:27.542 "mibps": 18.98849585117145, 00:46:27.542 "io_failed": 0, 00:46:27.542 "io_timeout": 0, 00:46:27.542 "avg_latency_us": 26287.200200505373, 00:46:27.542 "min_latency_us": 4766.254545454545, 00:46:27.542 "max_latency_us": 20971.52 00:46:27.542 } 00:46:27.542 ], 00:46:27.542 "core_count": 1 00:46:27.542 } 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71912 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71912 ']' 00:46:27.542 05:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71912 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71912 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:27.542 killing process with pid 71912 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71912' 00:46:27.542 Received shutdown signal, test time was about 10.000000 seconds 00:46:27.542 00:46:27.542 Latency(us) 00:46:27.542 [2024-12-09T05:37:14.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:27.542 [2024-12-09T05:37:14.514Z] =================================================================================================================== 00:46:27.542 [2024-12-09T05:37:14.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71912 00:46:27.542 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71912 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71880 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71880 ']' 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71880 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71880 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:27.801 killing process with pid 71880 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71880' 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71880 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71880 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72045 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:27.801 
05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72045 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72045 ']' 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:27.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:27.801 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:28.060 [2024-12-09 05:37:14.799073] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:28.060 [2024-12-09 05:37:14.799169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:28.060 [2024-12-09 05:37:14.950610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:28.060 [2024-12-09 05:37:14.988040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:28.060 [2024-12-09 05:37:14.988105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:28.060 [2024-12-09 05:37:14.988120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:28.060 [2024-12-09 05:37:14.988131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:28.060 [2024-12-09 05:37:14.988140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:46:28.060 [2024-12-09 05:37:14.988481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:28.060 [2024-12-09 05:37:15.020897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:28.320 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:28.320 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:28.320 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:28.320 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:28.320 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:28.320 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:28.320 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.gvIUu5FAoY 00:46:28.320 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gvIUu5FAoY 00:46:28.320 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:28.582 [2024-12-09 05:37:15.359841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:28.582 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:28.839 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:46:29.097 [2024-12-09 05:37:15.904178] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:29.097 [2024-12-09 05:37:15.904491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:29.097 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:29.354 malloc0 00:46:29.354 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:29.612 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY 00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:46:29.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
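Strung together, the target-side setup just traced amounts to the following RPC sequence (assembled from the commands in this log; the key file /tmp/tmp.gvIUu5FAoY, the NQNs and the 10.0.0.3:4420 listener are the test's own values):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on the listener is what makes it a secure-channel (TLS) listener, hence the "TLS support is considered experimental" notice, and --psk ties the host entry to the key registered in the keyring.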
00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72093 00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72093 /var/tmp/bdevperf.sock 00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72093 ']' 00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:29.869 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:30.128 [2024-12-09 05:37:16.885705] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:30.128 [2024-12-09 05:37:16.886006] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72093 ] 00:46:30.128 [2024-12-09 05:37:17.042735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:30.128 [2024-12-09 05:37:17.082855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:30.386 [2024-12-09 05:37:17.116656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:30.952 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:30.952 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:30.952 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY 00:46:31.209 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:46:31.468 [2024-12-09 05:37:18.243369] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:31.468 nvme0n1 00:46:31.468 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:46:31.468 Running I/O for 1 seconds... 
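On the initiator side the steps are symmetric: the same PSK file is registered with the bdevperf application's own RPC socket and then referenced when attaching the controller. Reassembled from the commands above (socket path, NQNs and addresses are again the test's values):

  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The controller shows up as nvme0n1 only if the TLS handshake with the listener on 10.0.0.3:4420 succeeds; the short verify run that follows is the actual data-path check.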
00:46:32.842 4736.00 IOPS, 18.50 MiB/s 00:46:32.842 Latency(us) 00:46:32.842 [2024-12-09T05:37:19.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:32.842 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:32.842 Verification LBA range: start 0x0 length 0x2000 00:46:32.842 nvme0n1 : 1.02 4749.11 18.55 0.00 0.00 26697.03 6523.81 18230.92 00:46:32.842 [2024-12-09T05:37:19.814Z] =================================================================================================================== 00:46:32.842 [2024-12-09T05:37:19.814Z] Total : 4749.11 18.55 0.00 0.00 26697.03 6523.81 18230.92 00:46:32.842 { 00:46:32.842 "results": [ 00:46:32.842 { 00:46:32.842 "job": "nvme0n1", 00:46:32.842 "core_mask": "0x2", 00:46:32.842 "workload": "verify", 00:46:32.842 "status": "finished", 00:46:32.842 "verify_range": { 00:46:32.842 "start": 0, 00:46:32.842 "length": 8192 00:46:32.842 }, 00:46:32.842 "queue_depth": 128, 00:46:32.842 "io_size": 4096, 00:46:32.842 "runtime": 1.024192, 00:46:32.842 "iops": 4749.109541960883, 00:46:32.842 "mibps": 18.5512091482847, 00:46:32.842 "io_failed": 0, 00:46:32.842 "io_timeout": 0, 00:46:32.842 "avg_latency_us": 26697.03043062201, 00:46:32.842 "min_latency_us": 6523.810909090909, 00:46:32.842 "max_latency_us": 18230.923636363637 00:46:32.842 } 00:46:32.842 ], 00:46:32.842 "core_count": 1 00:46:32.842 } 00:46:32.842 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72093 00:46:32.842 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72093 ']' 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72093 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72093 00:46:32.843 killing process with pid 72093 00:46:32.843 Received shutdown signal, test time was about 1.000000 seconds 00:46:32.843 00:46:32.843 Latency(us) 00:46:32.843 [2024-12-09T05:37:19.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:32.843 [2024-12-09T05:37:19.815Z] =================================================================================================================== 00:46:32.843 [2024-12-09T05:37:19.815Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72093' 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72093 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72093 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72045 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72045 ']' 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72045 00:46:32.843 05:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72045 00:46:32.843 killing process with pid 72045 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72045' 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72045 00:46:32.843 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72045 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72144 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72144 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72144 ']' 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:33.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:33.102 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:33.102 [2024-12-09 05:37:19.940203] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:33.102 [2024-12-09 05:37:19.940457] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:33.361 [2024-12-09 05:37:20.087883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:33.361 [2024-12-09 05:37:20.114706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:33.361 [2024-12-09 05:37:20.114758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:46:33.361 [2024-12-09 05:37:20.114784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:33.361 [2024-12-09 05:37:20.114791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:33.361 [2024-12-09 05:37:20.114798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:33.361 [2024-12-09 05:37:20.115050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:33.361 [2024-12-09 05:37:20.141540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:33.361 [2024-12-09 05:37:20.230756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:33.361 malloc0 00:46:33.361 [2024-12-09 05:37:20.256465] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:33.361 [2024-12-09 05:37:20.256682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:33.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72163 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72163 /var/tmp/bdevperf.sock 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72163 ']' 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:46:33.361 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:33.362 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:33.620 [2024-12-09 05:37:20.344068] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:33.620 [2024-12-09 05:37:20.344335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72163 ] 00:46:33.620 [2024-12-09 05:37:20.494667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:33.620 [2024-12-09 05:37:20.523453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:33.620 [2024-12-09 05:37:20.551656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:33.879 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:33.879 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:33.879 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY 00:46:33.879 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:46:34.138 [2024-12-09 05:37:21.081953] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:34.397 nvme0n1 00:46:34.397 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:46:34.397 Running I/O for 1 seconds... 
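For reference, the TLS attach sequence traced above comes down to two RPCs against the bdevperf socket plus the perform_tests trigger; the sketch below simply restates the commands from this run with repo-relative paths (the socket path, key name, addresses and NQNs are the ones used here):

  # register the pre-shared key file under the name key0, then attach over TCP with it
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gvIUu5FAoY
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # start the queued verify workload on the idle (-z) bdevperf instance
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests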
00:46:35.591 4608.00 IOPS, 18.00 MiB/s 00:46:35.591 Latency(us) 00:46:35.591 [2024-12-09T05:37:22.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:35.591 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:35.591 Verification LBA range: start 0x0 length 0x2000 00:46:35.591 nvme0n1 : 1.02 4655.28 18.18 0.00 0.00 27223.12 6166.34 17039.36 00:46:35.591 [2024-12-09T05:37:22.563Z] =================================================================================================================== 00:46:35.591 [2024-12-09T05:37:22.563Z] Total : 4655.28 18.18 0.00 0.00 27223.12 6166.34 17039.36 00:46:35.591 { 00:46:35.591 "results": [ 00:46:35.591 { 00:46:35.591 "job": "nvme0n1", 00:46:35.591 "core_mask": "0x2", 00:46:35.592 "workload": "verify", 00:46:35.592 "status": "finished", 00:46:35.592 "verify_range": { 00:46:35.592 "start": 0, 00:46:35.592 "length": 8192 00:46:35.592 }, 00:46:35.592 "queue_depth": 128, 00:46:35.592 "io_size": 4096, 00:46:35.592 "runtime": 1.01734, 00:46:35.592 "iops": 4655.277488351977, 00:46:35.592 "mibps": 18.18467768887491, 00:46:35.592 "io_failed": 0, 00:46:35.592 "io_timeout": 0, 00:46:35.592 "avg_latency_us": 27223.118624078626, 00:46:35.592 "min_latency_us": 6166.341818181818, 00:46:35.592 "max_latency_us": 17039.36 00:46:35.592 } 00:46:35.592 ], 00:46:35.592 "core_count": 1 00:46:35.592 } 00:46:35.592 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:46:35.592 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.592 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:35.592 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:35.592 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:46:35.592 "subsystems": [ 00:46:35.592 { 00:46:35.592 "subsystem": "keyring", 00:46:35.592 "config": [ 00:46:35.592 { 00:46:35.592 "method": "keyring_file_add_key", 00:46:35.592 "params": { 00:46:35.592 "name": "key0", 00:46:35.592 "path": "/tmp/tmp.gvIUu5FAoY" 00:46:35.592 } 00:46:35.592 } 00:46:35.592 ] 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "subsystem": "iobuf", 00:46:35.592 "config": [ 00:46:35.592 { 00:46:35.592 "method": "iobuf_set_options", 00:46:35.592 "params": { 00:46:35.592 "small_pool_count": 8192, 00:46:35.592 "large_pool_count": 1024, 00:46:35.592 "small_bufsize": 8192, 00:46:35.592 "large_bufsize": 135168, 00:46:35.592 "enable_numa": false 00:46:35.592 } 00:46:35.592 } 00:46:35.592 ] 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "subsystem": "sock", 00:46:35.592 "config": [ 00:46:35.592 { 00:46:35.592 "method": "sock_set_default_impl", 00:46:35.592 "params": { 00:46:35.592 "impl_name": "uring" 00:46:35.592 } 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "method": "sock_impl_set_options", 00:46:35.592 "params": { 00:46:35.592 "impl_name": "ssl", 00:46:35.592 "recv_buf_size": 4096, 00:46:35.592 "send_buf_size": 4096, 00:46:35.592 "enable_recv_pipe": true, 00:46:35.592 "enable_quickack": false, 00:46:35.592 "enable_placement_id": 0, 00:46:35.592 "enable_zerocopy_send_server": true, 00:46:35.592 "enable_zerocopy_send_client": false, 00:46:35.592 "zerocopy_threshold": 0, 00:46:35.592 "tls_version": 0, 00:46:35.592 "enable_ktls": false 00:46:35.592 } 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "method": "sock_impl_set_options", 00:46:35.592 "params": { 00:46:35.592 "impl_name": "posix", 
00:46:35.592 "recv_buf_size": 2097152, 00:46:35.592 "send_buf_size": 2097152, 00:46:35.592 "enable_recv_pipe": true, 00:46:35.592 "enable_quickack": false, 00:46:35.592 "enable_placement_id": 0, 00:46:35.592 "enable_zerocopy_send_server": true, 00:46:35.592 "enable_zerocopy_send_client": false, 00:46:35.592 "zerocopy_threshold": 0, 00:46:35.592 "tls_version": 0, 00:46:35.592 "enable_ktls": false 00:46:35.592 } 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "method": "sock_impl_set_options", 00:46:35.592 "params": { 00:46:35.592 "impl_name": "uring", 00:46:35.592 "recv_buf_size": 2097152, 00:46:35.592 "send_buf_size": 2097152, 00:46:35.592 "enable_recv_pipe": true, 00:46:35.592 "enable_quickack": false, 00:46:35.592 "enable_placement_id": 0, 00:46:35.592 "enable_zerocopy_send_server": false, 00:46:35.592 "enable_zerocopy_send_client": false, 00:46:35.592 "zerocopy_threshold": 0, 00:46:35.592 "tls_version": 0, 00:46:35.592 "enable_ktls": false 00:46:35.592 } 00:46:35.592 } 00:46:35.592 ] 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "subsystem": "vmd", 00:46:35.592 "config": [] 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "subsystem": "accel", 00:46:35.592 "config": [ 00:46:35.592 { 00:46:35.592 "method": "accel_set_options", 00:46:35.592 "params": { 00:46:35.592 "small_cache_size": 128, 00:46:35.592 "large_cache_size": 16, 00:46:35.592 "task_count": 2048, 00:46:35.592 "sequence_count": 2048, 00:46:35.592 "buf_count": 2048 00:46:35.592 } 00:46:35.592 } 00:46:35.592 ] 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "subsystem": "bdev", 00:46:35.592 "config": [ 00:46:35.592 { 00:46:35.592 "method": "bdev_set_options", 00:46:35.592 "params": { 00:46:35.592 "bdev_io_pool_size": 65535, 00:46:35.592 "bdev_io_cache_size": 256, 00:46:35.592 "bdev_auto_examine": true, 00:46:35.592 "iobuf_small_cache_size": 128, 00:46:35.592 "iobuf_large_cache_size": 16 00:46:35.592 } 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "method": "bdev_raid_set_options", 00:46:35.592 "params": { 00:46:35.592 "process_window_size_kb": 1024, 00:46:35.592 "process_max_bandwidth_mb_sec": 0 00:46:35.592 } 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "method": "bdev_iscsi_set_options", 00:46:35.592 "params": { 00:46:35.592 "timeout_sec": 30 00:46:35.592 } 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "method": "bdev_nvme_set_options", 00:46:35.592 "params": { 00:46:35.592 "action_on_timeout": "none", 00:46:35.592 "timeout_us": 0, 00:46:35.592 "timeout_admin_us": 0, 00:46:35.592 "keep_alive_timeout_ms": 10000, 00:46:35.592 "arbitration_burst": 0, 00:46:35.592 "low_priority_weight": 0, 00:46:35.592 "medium_priority_weight": 0, 00:46:35.592 "high_priority_weight": 0, 00:46:35.592 "nvme_adminq_poll_period_us": 10000, 00:46:35.592 "nvme_ioq_poll_period_us": 0, 00:46:35.592 "io_queue_requests": 0, 00:46:35.592 "delay_cmd_submit": true, 00:46:35.592 "transport_retry_count": 4, 00:46:35.592 "bdev_retry_count": 3, 00:46:35.592 "transport_ack_timeout": 0, 00:46:35.592 "ctrlr_loss_timeout_sec": 0, 00:46:35.592 "reconnect_delay_sec": 0, 00:46:35.592 "fast_io_fail_timeout_sec": 0, 00:46:35.592 "disable_auto_failback": false, 00:46:35.592 "generate_uuids": false, 00:46:35.592 "transport_tos": 0, 00:46:35.592 "nvme_error_stat": false, 00:46:35.592 "rdma_srq_size": 0, 00:46:35.592 "io_path_stat": false, 00:46:35.592 "allow_accel_sequence": false, 00:46:35.592 "rdma_max_cq_size": 0, 00:46:35.592 "rdma_cm_event_timeout_ms": 0, 00:46:35.592 "dhchap_digests": [ 00:46:35.592 "sha256", 00:46:35.592 "sha384", 00:46:35.592 "sha512" 00:46:35.592 ], 00:46:35.592 
"dhchap_dhgroups": [ 00:46:35.592 "null", 00:46:35.592 "ffdhe2048", 00:46:35.592 "ffdhe3072", 00:46:35.592 "ffdhe4096", 00:46:35.592 "ffdhe6144", 00:46:35.592 "ffdhe8192" 00:46:35.592 ] 00:46:35.592 } 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "method": "bdev_nvme_set_hotplug", 00:46:35.592 "params": { 00:46:35.592 "period_us": 100000, 00:46:35.592 "enable": false 00:46:35.592 } 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "method": "bdev_malloc_create", 00:46:35.592 "params": { 00:46:35.592 "name": "malloc0", 00:46:35.592 "num_blocks": 8192, 00:46:35.592 "block_size": 4096, 00:46:35.592 "physical_block_size": 4096, 00:46:35.592 "uuid": "07d82b18-29ff-4fc6-81b8-c18e13d32e31", 00:46:35.592 "optimal_io_boundary": 0, 00:46:35.592 "md_size": 0, 00:46:35.592 "dif_type": 0, 00:46:35.592 "dif_is_head_of_md": false, 00:46:35.592 "dif_pi_format": 0 00:46:35.592 } 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "method": "bdev_wait_for_examine" 00:46:35.592 } 00:46:35.592 ] 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "subsystem": "nbd", 00:46:35.592 "config": [] 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "subsystem": "scheduler", 00:46:35.592 "config": [ 00:46:35.592 { 00:46:35.592 "method": "framework_set_scheduler", 00:46:35.592 "params": { 00:46:35.592 "name": "static" 00:46:35.592 } 00:46:35.592 } 00:46:35.592 ] 00:46:35.592 }, 00:46:35.592 { 00:46:35.592 "subsystem": "nvmf", 00:46:35.592 "config": [ 00:46:35.592 { 00:46:35.592 "method": "nvmf_set_config", 00:46:35.592 "params": { 00:46:35.592 "discovery_filter": "match_any", 00:46:35.592 "admin_cmd_passthru": { 00:46:35.592 "identify_ctrlr": false 00:46:35.592 }, 00:46:35.592 "dhchap_digests": [ 00:46:35.592 "sha256", 00:46:35.592 "sha384", 00:46:35.592 "sha512" 00:46:35.592 ], 00:46:35.592 "dhchap_dhgroups": [ 00:46:35.592 "null", 00:46:35.592 "ffdhe2048", 00:46:35.592 "ffdhe3072", 00:46:35.592 "ffdhe4096", 00:46:35.592 "ffdhe6144", 00:46:35.592 "ffdhe8192" 00:46:35.592 ] 00:46:35.592 } 00:46:35.592 }, 00:46:35.592 { 00:46:35.593 "method": "nvmf_set_max_subsystems", 00:46:35.593 "params": { 00:46:35.593 "max_subsystems": 1024 00:46:35.593 } 00:46:35.593 }, 00:46:35.593 { 00:46:35.593 "method": "nvmf_set_crdt", 00:46:35.593 "params": { 00:46:35.593 "crdt1": 0, 00:46:35.593 "crdt2": 0, 00:46:35.593 "crdt3": 0 00:46:35.593 } 00:46:35.593 }, 00:46:35.593 { 00:46:35.593 "method": "nvmf_create_transport", 00:46:35.593 "params": { 00:46:35.593 "trtype": "TCP", 00:46:35.593 "max_queue_depth": 128, 00:46:35.593 "max_io_qpairs_per_ctrlr": 127, 00:46:35.593 "in_capsule_data_size": 4096, 00:46:35.593 "max_io_size": 131072, 00:46:35.593 "io_unit_size": 131072, 00:46:35.593 "max_aq_depth": 128, 00:46:35.593 "num_shared_buffers": 511, 00:46:35.593 "buf_cache_size": 4294967295, 00:46:35.593 "dif_insert_or_strip": false, 00:46:35.593 "zcopy": false, 00:46:35.593 "c2h_success": false, 00:46:35.593 "sock_priority": 0, 00:46:35.593 "abort_timeout_sec": 1, 00:46:35.593 "ack_timeout": 0, 00:46:35.593 "data_wr_pool_size": 0 00:46:35.593 } 00:46:35.593 }, 00:46:35.593 { 00:46:35.593 "method": "nvmf_create_subsystem", 00:46:35.593 "params": { 00:46:35.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:35.593 "allow_any_host": false, 00:46:35.593 "serial_number": "00000000000000000000", 00:46:35.593 "model_number": "SPDK bdev Controller", 00:46:35.593 "max_namespaces": 32, 00:46:35.593 "min_cntlid": 1, 00:46:35.593 "max_cntlid": 65519, 00:46:35.593 "ana_reporting": false 00:46:35.593 } 00:46:35.593 }, 00:46:35.593 { 00:46:35.593 "method": "nvmf_subsystem_add_host", 
00:46:35.593 "params": { 00:46:35.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:35.593 "host": "nqn.2016-06.io.spdk:host1", 00:46:35.593 "psk": "key0" 00:46:35.593 } 00:46:35.593 }, 00:46:35.593 { 00:46:35.593 "method": "nvmf_subsystem_add_ns", 00:46:35.593 "params": { 00:46:35.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:35.593 "namespace": { 00:46:35.593 "nsid": 1, 00:46:35.593 "bdev_name": "malloc0", 00:46:35.593 "nguid": "07D82B1829FF4FC681B8C18E13D32E31", 00:46:35.593 "uuid": "07d82b18-29ff-4fc6-81b8-c18e13d32e31", 00:46:35.593 "no_auto_visible": false 00:46:35.593 } 00:46:35.593 } 00:46:35.593 }, 00:46:35.593 { 00:46:35.593 "method": "nvmf_subsystem_add_listener", 00:46:35.593 "params": { 00:46:35.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:35.593 "listen_address": { 00:46:35.593 "trtype": "TCP", 00:46:35.593 "adrfam": "IPv4", 00:46:35.593 "traddr": "10.0.0.3", 00:46:35.593 "trsvcid": "4420" 00:46:35.593 }, 00:46:35.593 "secure_channel": false, 00:46:35.593 "sock_impl": "ssl" 00:46:35.593 } 00:46:35.593 } 00:46:35.593 ] 00:46:35.593 } 00:46:35.593 ] 00:46:35.593 }' 00:46:35.593 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:46:35.851 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:46:35.851 "subsystems": [ 00:46:35.851 { 00:46:35.851 "subsystem": "keyring", 00:46:35.851 "config": [ 00:46:35.851 { 00:46:35.851 "method": "keyring_file_add_key", 00:46:35.851 "params": { 00:46:35.851 "name": "key0", 00:46:35.851 "path": "/tmp/tmp.gvIUu5FAoY" 00:46:35.851 } 00:46:35.851 } 00:46:35.851 ] 00:46:35.851 }, 00:46:35.851 { 00:46:35.851 "subsystem": "iobuf", 00:46:35.851 "config": [ 00:46:35.851 { 00:46:35.851 "method": "iobuf_set_options", 00:46:35.851 "params": { 00:46:35.851 "small_pool_count": 8192, 00:46:35.851 "large_pool_count": 1024, 00:46:35.851 "small_bufsize": 8192, 00:46:35.851 "large_bufsize": 135168, 00:46:35.851 "enable_numa": false 00:46:35.851 } 00:46:35.851 } 00:46:35.851 ] 00:46:35.851 }, 00:46:35.851 { 00:46:35.851 "subsystem": "sock", 00:46:35.851 "config": [ 00:46:35.851 { 00:46:35.851 "method": "sock_set_default_impl", 00:46:35.851 "params": { 00:46:35.851 "impl_name": "uring" 00:46:35.851 } 00:46:35.851 }, 00:46:35.851 { 00:46:35.851 "method": "sock_impl_set_options", 00:46:35.851 "params": { 00:46:35.851 "impl_name": "ssl", 00:46:35.851 "recv_buf_size": 4096, 00:46:35.851 "send_buf_size": 4096, 00:46:35.851 "enable_recv_pipe": true, 00:46:35.851 "enable_quickack": false, 00:46:35.851 "enable_placement_id": 0, 00:46:35.851 "enable_zerocopy_send_server": true, 00:46:35.851 "enable_zerocopy_send_client": false, 00:46:35.851 "zerocopy_threshold": 0, 00:46:35.851 "tls_version": 0, 00:46:35.851 "enable_ktls": false 00:46:35.851 } 00:46:35.851 }, 00:46:35.851 { 00:46:35.851 "method": "sock_impl_set_options", 00:46:35.851 "params": { 00:46:35.851 "impl_name": "posix", 00:46:35.851 "recv_buf_size": 2097152, 00:46:35.851 "send_buf_size": 2097152, 00:46:35.851 "enable_recv_pipe": true, 00:46:35.851 "enable_quickack": false, 00:46:35.851 "enable_placement_id": 0, 00:46:35.851 "enable_zerocopy_send_server": true, 00:46:35.851 "enable_zerocopy_send_client": false, 00:46:35.851 "zerocopy_threshold": 0, 00:46:35.851 "tls_version": 0, 00:46:35.851 "enable_ktls": false 00:46:35.851 } 00:46:35.851 }, 00:46:35.851 { 00:46:35.851 "method": "sock_impl_set_options", 00:46:35.851 "params": { 00:46:35.851 "impl_name": "uring", 00:46:35.851 
"recv_buf_size": 2097152, 00:46:35.851 "send_buf_size": 2097152, 00:46:35.851 "enable_recv_pipe": true, 00:46:35.851 "enable_quickack": false, 00:46:35.852 "enable_placement_id": 0, 00:46:35.852 "enable_zerocopy_send_server": false, 00:46:35.852 "enable_zerocopy_send_client": false, 00:46:35.852 "zerocopy_threshold": 0, 00:46:35.852 "tls_version": 0, 00:46:35.852 "enable_ktls": false 00:46:35.852 } 00:46:35.852 } 00:46:35.852 ] 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "subsystem": "vmd", 00:46:35.852 "config": [] 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "subsystem": "accel", 00:46:35.852 "config": [ 00:46:35.852 { 00:46:35.852 "method": "accel_set_options", 00:46:35.852 "params": { 00:46:35.852 "small_cache_size": 128, 00:46:35.852 "large_cache_size": 16, 00:46:35.852 "task_count": 2048, 00:46:35.852 "sequence_count": 2048, 00:46:35.852 "buf_count": 2048 00:46:35.852 } 00:46:35.852 } 00:46:35.852 ] 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "subsystem": "bdev", 00:46:35.852 "config": [ 00:46:35.852 { 00:46:35.852 "method": "bdev_set_options", 00:46:35.852 "params": { 00:46:35.852 "bdev_io_pool_size": 65535, 00:46:35.852 "bdev_io_cache_size": 256, 00:46:35.852 "bdev_auto_examine": true, 00:46:35.852 "iobuf_small_cache_size": 128, 00:46:35.852 "iobuf_large_cache_size": 16 00:46:35.852 } 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "method": "bdev_raid_set_options", 00:46:35.852 "params": { 00:46:35.852 "process_window_size_kb": 1024, 00:46:35.852 "process_max_bandwidth_mb_sec": 0 00:46:35.852 } 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "method": "bdev_iscsi_set_options", 00:46:35.852 "params": { 00:46:35.852 "timeout_sec": 30 00:46:35.852 } 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "method": "bdev_nvme_set_options", 00:46:35.852 "params": { 00:46:35.852 "action_on_timeout": "none", 00:46:35.852 "timeout_us": 0, 00:46:35.852 "timeout_admin_us": 0, 00:46:35.852 "keep_alive_timeout_ms": 10000, 00:46:35.852 "arbitration_burst": 0, 00:46:35.852 "low_priority_weight": 0, 00:46:35.852 "medium_priority_weight": 0, 00:46:35.852 "high_priority_weight": 0, 00:46:35.852 "nvme_adminq_poll_period_us": 10000, 00:46:35.852 "nvme_ioq_poll_period_us": 0, 00:46:35.852 "io_queue_requests": 512, 00:46:35.852 "delay_cmd_submit": true, 00:46:35.852 "transport_retry_count": 4, 00:46:35.852 "bdev_retry_count": 3, 00:46:35.852 "transport_ack_timeout": 0, 00:46:35.852 "ctrlr_loss_timeout_sec": 0, 00:46:35.852 "reconnect_delay_sec": 0, 00:46:35.852 "fast_io_fail_timeout_sec": 0, 00:46:35.852 "disable_auto_failback": false, 00:46:35.852 "generate_uuids": false, 00:46:35.852 "transport_tos": 0, 00:46:35.852 "nvme_error_stat": false, 00:46:35.852 "rdma_srq_size": 0, 00:46:35.852 "io_path_stat": false, 00:46:35.852 "allow_accel_sequence": false, 00:46:35.852 "rdma_max_cq_size": 0, 00:46:35.852 "rdma_cm_event_timeout_ms": 0, 00:46:35.852 "dhchap_digests": [ 00:46:35.852 "sha256", 00:46:35.852 "sha384", 00:46:35.852 "sha512" 00:46:35.852 ], 00:46:35.852 "dhchap_dhgroups": [ 00:46:35.852 "null", 00:46:35.852 "ffdhe2048", 00:46:35.852 "ffdhe3072", 00:46:35.852 "ffdhe4096", 00:46:35.852 "ffdhe6144", 00:46:35.852 "ffdhe8192" 00:46:35.852 ] 00:46:35.852 } 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "method": "bdev_nvme_attach_controller", 00:46:35.852 "params": { 00:46:35.852 "name": "nvme0", 00:46:35.852 "trtype": "TCP", 00:46:35.852 "adrfam": "IPv4", 00:46:35.852 "traddr": "10.0.0.3", 00:46:35.852 "trsvcid": "4420", 00:46:35.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:35.852 "prchk_reftag": false, 00:46:35.852 
"prchk_guard": false, 00:46:35.852 "ctrlr_loss_timeout_sec": 0, 00:46:35.852 "reconnect_delay_sec": 0, 00:46:35.852 "fast_io_fail_timeout_sec": 0, 00:46:35.852 "psk": "key0", 00:46:35.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:35.852 "hdgst": false, 00:46:35.852 "ddgst": false, 00:46:35.852 "multipath": "multipath" 00:46:35.852 } 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "method": "bdev_nvme_set_hotplug", 00:46:35.852 "params": { 00:46:35.852 "period_us": 100000, 00:46:35.852 "enable": false 00:46:35.852 } 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "method": "bdev_enable_histogram", 00:46:35.852 "params": { 00:46:35.852 "name": "nvme0n1", 00:46:35.852 "enable": true 00:46:35.852 } 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "method": "bdev_wait_for_examine" 00:46:35.852 } 00:46:35.852 ] 00:46:35.852 }, 00:46:35.852 { 00:46:35.852 "subsystem": "nbd", 00:46:35.852 "config": [] 00:46:35.852 } 00:46:35.852 ] 00:46:35.852 }' 00:46:35.852 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72163 00:46:35.852 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72163 ']' 00:46:35.852 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72163 00:46:35.852 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:35.852 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:35.852 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72163 00:46:36.111 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:36.111 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:36.111 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72163' 00:46:36.111 killing process with pid 72163 00:46:36.111 Received shutdown signal, test time was about 1.000000 seconds 00:46:36.111 00:46:36.111 Latency(us) 00:46:36.111 [2024-12-09T05:37:23.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:36.111 [2024-12-09T05:37:23.083Z] =================================================================================================================== 00:46:36.111 [2024-12-09T05:37:23.083Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:36.111 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72163 00:46:36.111 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72163 00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72144 00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72144 ']' 00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72144 00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72144 00:46:36.111 killing process with pid 72144 00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72144' 00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72144 00:46:36.111 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72144 00:46:36.370 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:46:36.370 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:36.370 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:36.370 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:46:36.370 "subsystems": [ 00:46:36.370 { 00:46:36.370 "subsystem": "keyring", 00:46:36.370 "config": [ 00:46:36.370 { 00:46:36.370 "method": "keyring_file_add_key", 00:46:36.370 "params": { 00:46:36.370 "name": "key0", 00:46:36.370 "path": "/tmp/tmp.gvIUu5FAoY" 00:46:36.370 } 00:46:36.370 } 00:46:36.370 ] 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "subsystem": "iobuf", 00:46:36.370 "config": [ 00:46:36.370 { 00:46:36.370 "method": "iobuf_set_options", 00:46:36.370 "params": { 00:46:36.370 "small_pool_count": 8192, 00:46:36.370 "large_pool_count": 1024, 00:46:36.370 "small_bufsize": 8192, 00:46:36.370 "large_bufsize": 135168, 00:46:36.370 "enable_numa": false 00:46:36.370 } 00:46:36.370 } 00:46:36.370 ] 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "subsystem": "sock", 00:46:36.370 "config": [ 00:46:36.370 { 00:46:36.370 "method": "sock_set_default_impl", 00:46:36.370 "params": { 00:46:36.370 "impl_name": "uring" 00:46:36.370 } 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "method": "sock_impl_set_options", 00:46:36.370 "params": { 00:46:36.370 "impl_name": "ssl", 00:46:36.370 "recv_buf_size": 4096, 00:46:36.370 "send_buf_size": 4096, 00:46:36.370 "enable_recv_pipe": true, 00:46:36.370 "enable_quickack": false, 00:46:36.370 "enable_placement_id": 0, 00:46:36.370 "enable_zerocopy_send_server": true, 00:46:36.370 "enable_zerocopy_send_client": false, 00:46:36.370 "zerocopy_threshold": 0, 00:46:36.370 "tls_version": 0, 00:46:36.370 "enable_ktls": false 00:46:36.370 } 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "method": "sock_impl_set_options", 00:46:36.370 "params": { 00:46:36.370 "impl_name": "posix", 00:46:36.370 "recv_buf_size": 2097152, 00:46:36.370 "send_buf_size": 2097152, 00:46:36.370 "enable_recv_pipe": true, 00:46:36.370 "enable_quickack": false, 00:46:36.370 "enable_placement_id": 0, 00:46:36.370 "enable_zerocopy_send_server": true, 00:46:36.370 "enable_zerocopy_send_client": false, 00:46:36.370 "zerocopy_threshold": 0, 00:46:36.370 "tls_version": 0, 00:46:36.370 "enable_ktls": false 00:46:36.370 } 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "method": "sock_impl_set_options", 00:46:36.370 "params": { 00:46:36.370 "impl_name": "uring", 00:46:36.370 "recv_buf_size": 2097152, 00:46:36.370 "send_buf_size": 2097152, 00:46:36.370 "enable_recv_pipe": true, 00:46:36.370 "enable_quickack": false, 00:46:36.370 "enable_placement_id": 0, 00:46:36.370 "enable_zerocopy_send_server": false, 00:46:36.370 "enable_zerocopy_send_client": false, 00:46:36.370 "zerocopy_threshold": 0, 00:46:36.370 "tls_version": 0, 00:46:36.370 "enable_ktls": false 00:46:36.370 } 00:46:36.370 } 00:46:36.370 ] 00:46:36.370 }, 00:46:36.370 { 
00:46:36.370 "subsystem": "vmd", 00:46:36.370 "config": [] 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "subsystem": "accel", 00:46:36.370 "config": [ 00:46:36.370 { 00:46:36.370 "method": "accel_set_options", 00:46:36.370 "params": { 00:46:36.370 "small_cache_size": 128, 00:46:36.370 "large_cache_size": 16, 00:46:36.370 "task_count": 2048, 00:46:36.370 "sequence_count": 2048, 00:46:36.370 "buf_count": 2048 00:46:36.370 } 00:46:36.370 } 00:46:36.370 ] 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "subsystem": "bdev", 00:46:36.370 "config": [ 00:46:36.370 { 00:46:36.370 "method": "bdev_set_options", 00:46:36.370 "params": { 00:46:36.370 "bdev_io_pool_size": 65535, 00:46:36.370 "bdev_io_cache_size": 256, 00:46:36.370 "bdev_auto_examine": true, 00:46:36.370 "iobuf_small_cache_size": 128, 00:46:36.370 "iobuf_large_cache_size": 16 00:46:36.370 } 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "method": "bdev_raid_set_options", 00:46:36.370 "params": { 00:46:36.370 "process_window_size_kb": 1024, 00:46:36.370 "process_max_bandwidth_mb_sec": 0 00:46:36.370 } 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "method": "bdev_iscsi_set_options", 00:46:36.370 "params": { 00:46:36.370 "timeout_sec": 30 00:46:36.370 } 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "method": "bdev_nvme_set_options", 00:46:36.370 "params": { 00:46:36.370 "action_on_timeout": "none", 00:46:36.370 "timeout_us": 0, 00:46:36.370 "timeout_admin_us": 0, 00:46:36.370 "keep_alive_timeout_ms": 10000, 00:46:36.370 "arbitration_burst": 0, 00:46:36.370 "low_priority_weight": 0, 00:46:36.370 "medium_priority_weight": 0, 00:46:36.370 "high_priority_weight": 0, 00:46:36.370 "nvme_adminq_poll_period_us": 10000, 00:46:36.370 "nvme_ioq_poll_period_us": 0, 00:46:36.370 "io_queue_requests": 0, 00:46:36.370 "delay_cmd_submit": true, 00:46:36.370 "transport_retry_count": 4, 00:46:36.370 "bdev_retry_count": 3, 00:46:36.370 "transport_ack_timeout": 0, 00:46:36.370 "ctrlr_loss_timeout_sec": 0, 00:46:36.370 "reconnect_delay_sec": 0, 00:46:36.370 "fast_io_fail_timeout_sec": 0, 00:46:36.370 "disable_auto_failback": false, 00:46:36.370 "generate_uuids": false, 00:46:36.370 "transport_tos": 0, 00:46:36.370 "nvme_error_stat": false, 00:46:36.370 "rdma_srq_size": 0, 00:46:36.370 "io_path_stat": false, 00:46:36.370 "allow_accel_sequence": false, 00:46:36.370 "rdma_max_cq_size": 0, 00:46:36.370 "rdma_cm_event_timeout_ms": 0, 00:46:36.370 "dhchap_digests": [ 00:46:36.370 "sha256", 00:46:36.370 "sha384", 00:46:36.370 "sha512" 00:46:36.370 ], 00:46:36.370 "dhchap_dhgroups": [ 00:46:36.370 "null", 00:46:36.370 "ffdhe2048", 00:46:36.370 "ffdhe3072", 00:46:36.370 "ffdhe4096", 00:46:36.370 "ffdhe6144", 00:46:36.370 "ffdhe8192" 00:46:36.370 ] 00:46:36.370 } 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "method": "bdev_nvme_set_hotplug", 00:46:36.370 "params": { 00:46:36.370 "period_us": 100000, 00:46:36.370 "enable": false 00:46:36.370 } 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "method": "bdev_malloc_create", 00:46:36.370 "params": { 00:46:36.370 "name": "malloc0", 00:46:36.370 "num_blocks": 8192, 00:46:36.370 "block_size": 4096, 00:46:36.370 "physical_block_size": 4096, 00:46:36.370 "uuid": "07d82b18-29ff-4fc6-81b8-c18e13d32e31", 00:46:36.370 "optimal_io_boundary": 0, 00:46:36.370 "md_size": 0, 00:46:36.370 "dif_type": 0, 00:46:36.370 "dif_is_head_of_md": false, 00:46:36.370 "dif_pi_format": 0 00:46:36.370 } 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "method": "bdev_wait_for_examine" 00:46:36.370 } 00:46:36.370 ] 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "subsystem": 
"nbd", 00:46:36.370 "config": [] 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "subsystem": "scheduler", 00:46:36.370 "config": [ 00:46:36.370 { 00:46:36.370 "method": "framework_set_scheduler", 00:46:36.370 "params": { 00:46:36.370 "name": "static" 00:46:36.370 } 00:46:36.370 } 00:46:36.370 ] 00:46:36.370 }, 00:46:36.370 { 00:46:36.370 "subsystem": "nvmf", 00:46:36.370 "config": [ 00:46:36.370 { 00:46:36.370 "method": "nvmf_set_config", 00:46:36.370 "params": { 00:46:36.370 "discovery_filter": "match_any", 00:46:36.370 "admin_cmd_passthru": { 00:46:36.370 "identify_ctrlr": false 00:46:36.370 }, 00:46:36.370 "dhchap_digests": [ 00:46:36.370 "sha256", 00:46:36.370 "sha384", 00:46:36.370 "sha512" 00:46:36.370 ], 00:46:36.371 "dhchap_dhgroups": [ 00:46:36.371 "null", 00:46:36.371 "ffdhe2048", 00:46:36.371 "ffdhe3072", 00:46:36.371 "ffdhe4096", 00:46:36.371 "ffdhe6144", 00:46:36.371 "ffdhe8192" 00:46:36.371 ] 00:46:36.371 } 00:46:36.371 }, 00:46:36.371 { 00:46:36.371 "method": "nvmf_set_max_subsystems", 00:46:36.371 "params": { 00:46:36.371 "max_subsystems": 1024 00:46:36.371 } 00:46:36.371 }, 00:46:36.371 { 00:46:36.371 "method": "nvmf_set_crdt", 00:46:36.371 "params": { 00:46:36.371 "crdt1": 0, 00:46:36.371 "crdt2": 0, 00:46:36.371 "crdt3": 0 00:46:36.371 } 00:46:36.371 }, 00:46:36.371 { 00:46:36.371 "method": "nvmf_create_transport", 00:46:36.371 "params": { 00:46:36.371 "trtype": "TCP", 00:46:36.371 "max_queue_depth": 128, 00:46:36.371 "max_io_qpairs_per_ctrlr": 127, 00:46:36.371 "in_capsule_data_size": 4096, 00:46:36.371 "max_io_size": 131072, 00:46:36.371 "io_unit_size": 131072, 00:46:36.371 "max_aq_depth": 128, 00:46:36.371 "num_shared_buffers": 511, 00:46:36.371 "buf_cache_size": 4294967295, 00:46:36.371 "dif_insert_or_strip": false, 00:46:36.371 "zcopy": false, 00:46:36.371 "c2h_success": false, 00:46:36.371 "sock_priority": 0, 00:46:36.371 "abort_timeout_sec": 1, 00:46:36.371 "ack_timeout": 0, 00:46:36.371 "data_wr_pool_size": 0 00:46:36.371 } 00:46:36.371 }, 00:46:36.371 { 00:46:36.371 "method": "nvmf_create_subsystem", 00:46:36.371 "params": { 00:46:36.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.371 "allow_any_host": false, 00:46:36.371 "serial_number": "00000000000000000000", 00:46:36.371 "model_number": "SPDK bdev Controller", 00:46:36.371 "max_namespaces": 32, 00:46:36.371 "min_cntlid": 1, 00:46:36.371 "max_cntlid": 65519, 00:46:36.371 "ana_reporting": false 00:46:36.371 } 00:46:36.371 }, 00:46:36.371 { 00:46:36.371 "method": "nvmf_subsystem_add_host", 00:46:36.371 "params": { 00:46:36.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.371 "host": "nqn.2016-06.io.spdk:host1", 00:46:36.371 "psk": "key0" 00:46:36.371 } 00:46:36.371 }, 00:46:36.371 { 00:46:36.371 "method": "nvmf_subsystem_add_ns", 00:46:36.371 "params": { 00:46:36.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.371 "namespace": { 00:46:36.371 "nsid": 1, 00:46:36.371 "bdev_name": "malloc0", 00:46:36.371 "nguid": "07D82B1829FF4FC681B8C18E13D32E31", 00:46:36.371 "uuid": "07d82b18-29ff-4fc6-81b8-c18e13d32e31", 00:46:36.371 "no_auto_visible": false 00:46:36.371 } 00:46:36.371 } 00:46:36.371 }, 00:46:36.371 { 00:46:36.371 "method": "nvmf_subsystem_add_listener", 00:46:36.371 "params": { 00:46:36.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.371 "listen_address": { 00:46:36.371 "trtype": "TCP", 00:46:36.371 "adrfam": "IPv4", 00:46:36.371 "traddr": "10.0.0.3", 00:46:36.371 "trsvcid": "4420" 00:46:36.371 }, 00:46:36.371 "secure_channel": false, 00:46:36.371 "sock_impl": "ssl" 00:46:36.371 } 00:46:36.371 } 
00:46:36.371 ] 00:46:36.371 } 00:46:36.371 ] 00:46:36.371 }' 00:46:36.371 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:36.371 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:46:36.371 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72216 00:46:36.371 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72216 00:46:36.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:36.371 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72216 ']' 00:46:36.371 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:36.371 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:36.371 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:36.371 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:36.371 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:36.371 [2024-12-09 05:37:23.268608] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:36.371 [2024-12-09 05:37:23.269344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:36.629 [2024-12-09 05:37:23.416602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:36.629 [2024-12-09 05:37:23.442995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:36.629 [2024-12-09 05:37:23.443058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:36.629 [2024-12-09 05:37:23.443084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:36.629 [2024-12-09 05:37:23.443091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:36.629 [2024-12-09 05:37:23.443097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
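The target restart above is the save_config round trip this part of the test exercises: the running target's JSON configuration (including the key0 keyring entry and the TLS listener on 10.0.0.3:4420) is captured over RPC and handed back to a fresh nvmf_tgt through a /dev/fd path. A minimal sketch of that pattern, assuming repo-relative paths and the default /var/tmp/spdk.sock RPC socket (the test itself additionally wraps the target in the nvmf_tgt_ns_spdk network namespace):

  tgtcfg=$(scripts/rpc.py save_config)                       # JSON dump of the live target's subsystems
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")     # the config is read back via a /dev/fd/<n> path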
00:46:36.629 [2024-12-09 05:37:23.443402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:36.629 [2024-12-09 05:37:23.581909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:36.887 [2024-12-09 05:37:23.637264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:36.887 [2024-12-09 05:37:23.669225] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:36.887 [2024-12-09 05:37:23.669395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72247 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72247 /var/tmp/bdevperf.sock 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72247 ']' 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:37.455 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:46:37.455 "subsystems": [ 00:46:37.455 { 00:46:37.455 "subsystem": "keyring", 00:46:37.455 "config": [ 00:46:37.455 { 00:46:37.455 "method": "keyring_file_add_key", 00:46:37.455 "params": { 00:46:37.455 "name": "key0", 00:46:37.455 "path": "/tmp/tmp.gvIUu5FAoY" 00:46:37.455 } 00:46:37.455 } 00:46:37.455 ] 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "subsystem": "iobuf", 00:46:37.455 "config": [ 00:46:37.455 { 00:46:37.455 "method": "iobuf_set_options", 00:46:37.455 "params": { 00:46:37.455 "small_pool_count": 8192, 00:46:37.455 "large_pool_count": 1024, 00:46:37.455 "small_bufsize": 8192, 00:46:37.455 "large_bufsize": 135168, 00:46:37.455 "enable_numa": false 00:46:37.455 } 00:46:37.455 } 00:46:37.455 ] 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "subsystem": "sock", 00:46:37.455 "config": [ 00:46:37.455 { 00:46:37.455 "method": "sock_set_default_impl", 00:46:37.455 "params": { 00:46:37.455 "impl_name": "uring" 00:46:37.455 } 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "method": "sock_impl_set_options", 00:46:37.455 "params": { 00:46:37.455 "impl_name": "ssl", 00:46:37.455 "recv_buf_size": 4096, 00:46:37.455 "send_buf_size": 4096, 00:46:37.455 "enable_recv_pipe": true, 00:46:37.455 "enable_quickack": false, 00:46:37.455 "enable_placement_id": 0, 00:46:37.455 "enable_zerocopy_send_server": true, 00:46:37.455 "enable_zerocopy_send_client": false, 00:46:37.455 "zerocopy_threshold": 0, 00:46:37.455 "tls_version": 0, 00:46:37.455 "enable_ktls": 
false 00:46:37.455 } 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "method": "sock_impl_set_options", 00:46:37.455 "params": { 00:46:37.455 "impl_name": "posix", 00:46:37.455 "recv_buf_size": 2097152, 00:46:37.455 "send_buf_size": 2097152, 00:46:37.455 "enable_recv_pipe": true, 00:46:37.455 "enable_quickack": false, 00:46:37.455 "enable_placement_id": 0, 00:46:37.455 "enable_zerocopy_send_server": true, 00:46:37.455 "enable_zerocopy_send_client": false, 00:46:37.455 "zerocopy_threshold": 0, 00:46:37.455 "tls_version": 0, 00:46:37.455 "enable_ktls": false 00:46:37.455 } 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "method": "sock_impl_set_options", 00:46:37.455 "params": { 00:46:37.455 "impl_name": "uring", 00:46:37.455 "recv_buf_size": 2097152, 00:46:37.455 "send_buf_size": 2097152, 00:46:37.455 "enable_recv_pipe": true, 00:46:37.455 "enable_quickack": false, 00:46:37.455 "enable_placement_id": 0, 00:46:37.455 "enable_zerocopy_send_server": false, 00:46:37.455 "enable_zerocopy_send_client": false, 00:46:37.455 "zerocopy_threshold": 0, 00:46:37.455 "tls_version": 0, 00:46:37.455 "enable_ktls": false 00:46:37.455 } 00:46:37.455 } 00:46:37.455 ] 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "subsystem": "vmd", 00:46:37.455 "config": [] 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "subsystem": "accel", 00:46:37.455 "config": [ 00:46:37.455 { 00:46:37.455 "method": "accel_set_options", 00:46:37.455 "params": { 00:46:37.455 "small_cache_size": 128, 00:46:37.455 "large_cache_size": 16, 00:46:37.455 "task_count": 2048, 00:46:37.455 "sequence_count": 2048, 00:46:37.455 "buf_count": 2048 00:46:37.455 } 00:46:37.455 } 00:46:37.455 ] 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "subsystem": "bdev", 00:46:37.455 "config": [ 00:46:37.455 { 00:46:37.455 "method": "bdev_set_options", 00:46:37.455 "params": { 00:46:37.455 "bdev_io_pool_size": 65535, 00:46:37.455 "bdev_io_cache_size": 256, 00:46:37.455 "bdev_auto_examine": true, 00:46:37.455 "iobuf_small_cache_size": 128, 00:46:37.455 "iobuf_large_cache_size": 16 00:46:37.455 } 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "method": "bdev_raid_set_options", 00:46:37.455 "params": { 00:46:37.455 "process_window_size_kb": 1024, 00:46:37.455 "process_max_bandwidth_mb_sec": 0 00:46:37.455 } 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "method": "bdev_iscsi_set_options", 00:46:37.455 "params": { 00:46:37.455 "timeout_sec": 30 00:46:37.455 } 00:46:37.455 }, 00:46:37.455 { 00:46:37.455 "method": "bdev_nvme_set_options", 00:46:37.455 "params": { 00:46:37.456 "action_on_timeout": "none", 00:46:37.456 "timeout_us": 0, 00:46:37.456 "timeout_admin_us": 0, 00:46:37.456 "keep_alive_timeout_ms": 10000, 00:46:37.456 "arbitration_burst": 0, 00:46:37.456 "low_priority_weight": 0, 00:46:37.456 "medium_priority_weight": 0, 00:46:37.456 "high_priority_weight": 0, 00:46:37.456 "nvme_adminq_poll_period_us": 10000, 00:46:37.456 "nvme_ioq_poll_period_us": 0, 00:46:37.456 "io_queue_requests": 512, 00:46:37.456 "delay_cmd_submit": true, 00:46:37.456 "transport_retry_count": 4, 00:46:37.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:37.456 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:37.456 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:46:37.456 "bdev_retry_count": 3, 00:46:37.456 "transport_ack_timeout": 0, 00:46:37.456 "ctrlr_loss_timeout_sec": 0, 00:46:37.456 "reconnect_delay_sec": 0, 00:46:37.456 "fast_io_fail_timeout_sec": 0, 00:46:37.456 "disable_auto_failback": false, 00:46:37.456 "generate_uuids": false, 00:46:37.456 "transport_tos": 0, 00:46:37.456 "nvme_error_stat": false, 00:46:37.456 "rdma_srq_size": 0, 00:46:37.456 "io_path_stat": false, 00:46:37.456 "allow_accel_sequence": false, 00:46:37.456 "rdma_max_cq_size": 0, 00:46:37.456 "rdma_cm_event_timeout_ms": 0, 00:46:37.456 "dhchap_digests": [ 00:46:37.456 "sha256", 00:46:37.456 "sha384", 00:46:37.456 "sha512" 00:46:37.456 ], 00:46:37.456 "dhchap_dhgroups": [ 00:46:37.456 "null", 00:46:37.456 "ffdhe2048", 00:46:37.456 "ffdhe3072", 00:46:37.456 "ffdhe4096", 00:46:37.456 "ffdhe6144", 00:46:37.456 "ffdhe8192" 00:46:37.456 ] 00:46:37.456 } 00:46:37.456 }, 00:46:37.456 { 00:46:37.456 "method": "bdev_nvme_attach_controller", 00:46:37.456 "params": { 00:46:37.456 "name": "nvme0", 00:46:37.456 "trtype": "TCP", 00:46:37.456 "adrfam": "IPv4", 00:46:37.456 "traddr": "10.0.0.3", 00:46:37.456 "trsvcid": "4420", 00:46:37.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:37.456 "prchk_reftag": false, 00:46:37.456 "prchk_guard": false, 00:46:37.456 "ctrlr_loss_timeout_sec": 0, 00:46:37.456 "reconnect_delay_sec": 0, 00:46:37.456 "fast_io_fail_timeout_sec": 0, 00:46:37.456 "psk": "key0", 00:46:37.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:37.456 "hdgst": false, 00:46:37.456 "ddgst": false, 00:46:37.456 "multipath": "multipath" 00:46:37.456 } 00:46:37.456 }, 00:46:37.456 { 00:46:37.456 "method": "bdev_nvme_set_hotplug", 00:46:37.456 "params": { 00:46:37.456 "period_us": 100000, 00:46:37.456 "enable": false 00:46:37.456 } 00:46:37.456 }, 00:46:37.456 { 00:46:37.456 "method": "bdev_enable_histogram", 00:46:37.456 "params": { 00:46:37.456 "name": "nvme0n1", 00:46:37.456 "enable": true 00:46:37.456 } 00:46:37.456 }, 00:46:37.456 { 00:46:37.456 "method": "bdev_wait_for_examine" 00:46:37.456 } 00:46:37.456 ] 00:46:37.456 }, 00:46:37.456 { 00:46:37.456 "subsystem": "nbd", 00:46:37.456 "config": [] 00:46:37.456 } 00:46:37.456 ] 00:46:37.456 }' 00:46:37.456 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:37.456 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:37.456 [2024-12-09 05:37:24.299590] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:46:37.456 [2024-12-09 05:37:24.300365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72247 ] 00:46:37.714 [2024-12-09 05:37:24.456281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:37.714 [2024-12-09 05:37:24.495862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:37.714 [2024-12-09 05:37:24.610883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:37.715 [2024-12-09 05:37:24.647062] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:38.290 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:38.290 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:38.290 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:46:38.290 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:46:38.550 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:38.550 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:46:38.808 Running I/O for 1 seconds... 00:46:39.744 4736.00 IOPS, 18.50 MiB/s 00:46:39.744 Latency(us) 00:46:39.744 [2024-12-09T05:37:26.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:39.744 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:39.744 Verification LBA range: start 0x0 length 0x2000 00:46:39.744 nvme0n1 : 1.02 4775.20 18.65 0.00 0.00 26552.73 6196.13 17754.30 00:46:39.744 [2024-12-09T05:37:26.716Z] =================================================================================================================== 00:46:39.744 [2024-12-09T05:37:26.716Z] Total : 4775.20 18.65 0.00 0.00 26552.73 6196.13 17754.30 00:46:39.744 { 00:46:39.744 "results": [ 00:46:39.744 { 00:46:39.744 "job": "nvme0n1", 00:46:39.744 "core_mask": "0x2", 00:46:39.744 "workload": "verify", 00:46:39.744 "status": "finished", 00:46:39.744 "verify_range": { 00:46:39.744 "start": 0, 00:46:39.744 "length": 8192 00:46:39.744 }, 00:46:39.744 "queue_depth": 128, 00:46:39.744 "io_size": 4096, 00:46:39.744 "runtime": 1.018597, 00:46:39.744 "iops": 4775.195685830608, 00:46:39.744 "mibps": 18.653108147775814, 00:46:39.744 "io_failed": 0, 00:46:39.744 "io_timeout": 0, 00:46:39.744 "avg_latency_us": 26552.727272727272, 00:46:39.744 "min_latency_us": 6196.130909090909, 00:46:39.744 "max_latency_us": 17754.298181818183 00:46:39.744 } 00:46:39.744 ], 00:46:39.744 "core_count": 1 00:46:39.744 } 00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
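For context, the headline figures in the results block above are internally consistent (this is plain arithmetic on the reported values, not extra test output): 4775.20 IOPS of 4096-byte IOs is 4775.20 * 4096 / 2^20 ≈ 18.65 MiB/s, matching the "mibps" field, and with a constant queue depth of 128 Little's law predicts an average latency of roughly 128 / 4775.20 ≈ 26.8 ms, within a few percent of the reported 26.55 ms.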
00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:46:39.744 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:46:39.744 nvmf_trace.0 00:46:40.003 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:46:40.003 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72247 00:46:40.003 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72247 ']' 00:46:40.003 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72247 00:46:40.003 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:40.003 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:40.003 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72247 00:46:40.003 killing process with pid 72247 00:46:40.003 Received shutdown signal, test time was about 1.000000 seconds 00:46:40.003 00:46:40.003 Latency(us) 00:46:40.003 [2024-12-09T05:37:26.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:40.003 [2024-12-09T05:37:26.975Z] =================================================================================================================== 00:46:40.003 [2024-12-09T05:37:26.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:40.003 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:40.003 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:40.003 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72247' 00:46:40.004 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72247 00:46:40.004 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72247 00:46:40.004 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:46:40.004 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:40.004 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:46:40.004 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:40.004 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:46:40.004 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:40.004 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:40.004 rmmod nvme_tcp 00:46:40.262 rmmod nvme_fabrics 00:46:40.263 rmmod nvme_keyring 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72216 ']' 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72216 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72216 ']' 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72216 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72216 00:46:40.263 killing process with pid 72216 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72216' 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72216 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72216 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:40.263 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:40.522 05:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.FuAGoozGRN /tmp/tmp.lBxTV8yP4i /tmp/tmp.gvIUu5FAoY 00:46:40.522 ************************************ 00:46:40.522 END TEST nvmf_tls 00:46:40.522 ************************************ 00:46:40.522 00:46:40.522 real 1m23.372s 00:46:40.522 user 2m16.794s 00:46:40.522 sys 0m25.814s 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:40.522 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:46:40.782 ************************************ 00:46:40.782 START TEST nvmf_fips 00:46:40.782 ************************************ 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:46:40.782 * Looking for test storage... 
00:46:40.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:46:40.782 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:46:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:40.783 --rc genhtml_branch_coverage=1 00:46:40.783 --rc genhtml_function_coverage=1 00:46:40.783 --rc genhtml_legend=1 00:46:40.783 --rc geninfo_all_blocks=1 00:46:40.783 --rc geninfo_unexecuted_blocks=1 00:46:40.783 00:46:40.783 ' 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:46:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:40.783 --rc genhtml_branch_coverage=1 00:46:40.783 --rc genhtml_function_coverage=1 00:46:40.783 --rc genhtml_legend=1 00:46:40.783 --rc geninfo_all_blocks=1 00:46:40.783 --rc geninfo_unexecuted_blocks=1 00:46:40.783 00:46:40.783 ' 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:46:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:40.783 --rc genhtml_branch_coverage=1 00:46:40.783 --rc genhtml_function_coverage=1 00:46:40.783 --rc genhtml_legend=1 00:46:40.783 --rc geninfo_all_blocks=1 00:46:40.783 --rc geninfo_unexecuted_blocks=1 00:46:40.783 00:46:40.783 ' 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:46:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:40.783 --rc genhtml_branch_coverage=1 00:46:40.783 --rc genhtml_function_coverage=1 00:46:40.783 --rc genhtml_legend=1 00:46:40.783 --rc geninfo_all_blocks=1 00:46:40.783 --rc geninfo_unexecuted_blocks=1 00:46:40.783 00:46:40.783 ' 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:40.783 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:46:41.043 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:41.043 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:41.043 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:41.043 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:41.043 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:41.043 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:41.043 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:41.043 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:41.043 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:41.043 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:41.043 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:46:41.044 Error setting digest 00:46:41.044 40F2CDE0077F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:46:41.044 40F2CDE0077F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:41.044 
05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:41.044 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:41.045 Cannot find device "nvmf_init_br" 00:46:41.045 05:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:41.045 Cannot find device "nvmf_init_br2" 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:41.045 Cannot find device "nvmf_tgt_br" 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:41.045 Cannot find device "nvmf_tgt_br2" 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:41.045 Cannot find device "nvmf_init_br" 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:46:41.045 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:41.045 Cannot find device "nvmf_init_br2" 00:46:41.045 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:46:41.045 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:41.311 Cannot find device "nvmf_tgt_br" 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:41.311 Cannot find device "nvmf_tgt_br2" 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:41.311 Cannot find device "nvmf_br" 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:41.311 Cannot find device "nvmf_init_if" 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:41.311 Cannot find device "nvmf_init_if2" 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:41.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:41.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:41.311 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:41.312 05:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:46:41.312 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:46:41.571 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:41.571 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:46:41.571 00:46:41.571 --- 10.0.0.3 ping statistics --- 00:46:41.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:41.571 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:46:41.571 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:46:41.571 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:46:41.571 00:46:41.571 --- 10.0.0.4 ping statistics --- 00:46:41.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:41.571 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:41.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:41.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:46:41.571 00:46:41.571 --- 10.0.0.1 ping statistics --- 00:46:41.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:41.571 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:46:41.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:41.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:46:41.571 00:46:41.571 --- 10.0.0.2 ping statistics --- 00:46:41.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:41.571 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72562 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72562 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72562 ']' 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:41.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:41.571 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:46:41.571 [2024-12-09 05:37:28.448057] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:46:41.571 [2024-12-09 05:37:28.448148] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:41.829 [2024-12-09 05:37:28.606265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:41.829 [2024-12-09 05:37:28.644687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:41.829 [2024-12-09 05:37:28.644752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:41.829 [2024-12-09 05:37:28.644767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:41.829 [2024-12-09 05:37:28.644777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:41.829 [2024-12-09 05:37:28.644786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:41.829 [2024-12-09 05:37:28.645143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:41.829 [2024-12-09 05:37:28.681626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.p5J 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.p5J 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.p5J 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.p5J 00:46:41.829 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:42.396 [2024-12-09 05:37:29.074406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:42.396 [2024-12-09 05:37:29.090366] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:42.396 [2024-12-09 05:37:29.090600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:42.396 malloc0 00:46:42.396 05:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:42.396 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72596 00:46:42.396 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:42.396 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72596 /var/tmp/bdevperf.sock 00:46:42.396 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72596 ']' 00:46:42.396 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:42.396 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:42.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:42.396 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:42.397 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:42.397 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:46:42.397 [2024-12-09 05:37:29.229190] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:42.397 [2024-12-09 05:37:29.229300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72596 ] 00:46:42.654 [2024-12-09 05:37:29.384672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:42.654 [2024-12-09 05:37:29.423583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:42.654 [2024-12-09 05:37:29.457154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:43.221 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:43.221 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:46:43.221 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.p5J 00:46:43.512 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:46:43.771 [2024-12-09 05:37:30.648136] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:43.771 TLSTESTn1 00:46:43.771 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:46:44.029 Running I/O for 10 seconds... 
00:46:45.896 4562.00 IOPS, 17.82 MiB/s [2024-12-09T05:37:34.271Z] 4623.50 IOPS, 18.06 MiB/s [2024-12-09T05:37:34.883Z] 4663.67 IOPS, 18.22 MiB/s [2024-12-09T05:37:36.259Z] 4676.25 IOPS, 18.27 MiB/s [2024-12-09T05:37:37.196Z] 4688.20 IOPS, 18.31 MiB/s [2024-12-09T05:37:38.134Z] 4705.33 IOPS, 18.38 MiB/s [2024-12-09T05:37:39.072Z] 4718.71 IOPS, 18.43 MiB/s [2024-12-09T05:37:40.006Z] 4720.88 IOPS, 18.44 MiB/s [2024-12-09T05:37:40.939Z] 4725.33 IOPS, 18.46 MiB/s [2024-12-09T05:37:40.940Z] 4713.90 IOPS, 18.41 MiB/s 00:46:53.968 Latency(us) 00:46:53.968 [2024-12-09T05:37:40.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:53.968 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:46:53.968 Verification LBA range: start 0x0 length 0x2000 00:46:53.968 TLSTESTn1 : 10.02 4717.27 18.43 0.00 0.00 27082.28 6464.23 21686.46 00:46:53.968 [2024-12-09T05:37:40.940Z] =================================================================================================================== 00:46:53.968 [2024-12-09T05:37:40.940Z] Total : 4717.27 18.43 0.00 0.00 27082.28 6464.23 21686.46 00:46:53.968 { 00:46:53.968 "results": [ 00:46:53.968 { 00:46:53.968 "job": "TLSTESTn1", 00:46:53.968 "core_mask": "0x4", 00:46:53.968 "workload": "verify", 00:46:53.968 "status": "finished", 00:46:53.968 "verify_range": { 00:46:53.968 "start": 0, 00:46:53.968 "length": 8192 00:46:53.968 }, 00:46:53.968 "queue_depth": 128, 00:46:53.968 "io_size": 4096, 00:46:53.968 "runtime": 10.019346, 00:46:53.968 "iops": 4717.273961793514, 00:46:53.968 "mibps": 18.426851413255914, 00:46:53.968 "io_failed": 0, 00:46:53.968 "io_timeout": 0, 00:46:53.968 "avg_latency_us": 27082.28396134671, 00:46:53.968 "min_latency_us": 6464.232727272727, 00:46:53.968 "max_latency_us": 21686.458181818183 00:46:53.968 } 00:46:53.968 ], 00:46:53.968 "core_count": 1 00:46:53.968 } 00:46:53.968 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:46:53.968 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:46:53.968 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:46:53.968 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:46:53.968 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:46:53.968 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:46:53.968 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:46:53.968 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:46:53.968 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:46:53.968 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:46:53.968 nvmf_trace.0 00:46:54.226 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:46:54.226 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72596 00:46:54.226 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72596 ']' 00:46:54.226 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72596 00:46:54.226 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:46:54.226 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:54.226 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72596 00:46:54.226 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:54.226 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:54.226 killing process with pid 72596 00:46:54.226 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72596' 00:46:54.226 Received shutdown signal, test time was about 10.000000 seconds 00:46:54.226 00:46:54.226 Latency(us) 00:46:54.226 [2024-12-09T05:37:41.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:54.226 [2024-12-09T05:37:41.198Z] =================================================================================================================== 00:46:54.226 [2024-12-09T05:37:41.198Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:54.226 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72596 00:46:54.226 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72596 00:46:54.226 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:46:54.226 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:54.226 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:54.484 rmmod nvme_tcp 00:46:54.484 rmmod nvme_fabrics 00:46:54.484 rmmod nvme_keyring 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72562 ']' 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72562 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72562 ']' 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72562 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72562 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:46:54.484 killing process with pid 72562 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72562' 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72562 00:46:54.484 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72562 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:54.742 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:46:55.000 05:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.p5J 00:46:55.000 00:46:55.000 real 0m14.195s 00:46:55.000 user 0m19.988s 00:46:55.000 sys 0m5.527s 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:46:55.000 ************************************ 00:46:55.000 END TEST nvmf_fips 00:46:55.000 ************************************ 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:46:55.000 ************************************ 00:46:55.000 START TEST nvmf_control_msg_list 00:46:55.000 ************************************ 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:46:55.000 * Looking for test storage... 00:46:55.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:46:55.000 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:55.001 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:46:55.001 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:46:55.001 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:46:55.001 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:46:55.001 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:55.001 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:46:55.001 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:46:55.258 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:55.258 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:55.258 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:46:55.258 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:55.258 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:46:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:55.258 --rc genhtml_branch_coverage=1 00:46:55.258 --rc genhtml_function_coverage=1 00:46:55.258 --rc genhtml_legend=1 00:46:55.258 --rc geninfo_all_blocks=1 00:46:55.258 --rc geninfo_unexecuted_blocks=1 00:46:55.258 00:46:55.258 ' 00:46:55.258 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:46:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:55.258 --rc genhtml_branch_coverage=1 00:46:55.258 --rc genhtml_function_coverage=1 00:46:55.258 --rc genhtml_legend=1 00:46:55.258 --rc geninfo_all_blocks=1 00:46:55.258 --rc geninfo_unexecuted_blocks=1 00:46:55.259 00:46:55.259 ' 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:46:55.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:55.259 --rc genhtml_branch_coverage=1 00:46:55.259 --rc genhtml_function_coverage=1 00:46:55.259 --rc genhtml_legend=1 00:46:55.259 --rc geninfo_all_blocks=1 00:46:55.259 --rc geninfo_unexecuted_blocks=1 00:46:55.259 00:46:55.259 ' 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:46:55.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:55.259 --rc genhtml_branch_coverage=1 00:46:55.259 --rc genhtml_function_coverage=1 00:46:55.259 --rc genhtml_legend=1 00:46:55.259 --rc geninfo_all_blocks=1 00:46:55.259 --rc 
geninfo_unexecuted_blocks=1 00:46:55.259 00:46:55.259 ' 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:55.259 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:55.259 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:55.260 Cannot find device "nvmf_init_br" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:55.260 Cannot find device "nvmf_init_br2" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:55.260 Cannot find device "nvmf_tgt_br" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:55.260 Cannot find device "nvmf_tgt_br2" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:55.260 Cannot find device "nvmf_init_br" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:55.260 Cannot find device "nvmf_init_br2" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:55.260 Cannot find device "nvmf_tgt_br" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:55.260 Cannot find device "nvmf_tgt_br2" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:55.260 Cannot find device "nvmf_br" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:55.260 Cannot find 
device "nvmf_init_if" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:55.260 Cannot find device "nvmf_init_if2" 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:55.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:55.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:55.260 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:55.518 05:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:46:55.518 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:55.518 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:46:55.518 00:46:55.518 --- 10.0.0.3 ping statistics --- 00:46:55.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:55.518 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:46:55.518 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:46:55.518 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:46:55.518 00:46:55.518 --- 10.0.0.4 ping statistics --- 00:46:55.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:55.518 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:55.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:55.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:46:55.518 00:46:55.518 --- 10.0.0.1 ping statistics --- 00:46:55.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:55.518 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:46:55.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:55.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:46:55.518 00:46:55.518 --- 10.0.0.2 ping statistics --- 00:46:55.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:55.518 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72981 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72981 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 72981 ']' 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:55.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
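The ip netns / ip link / ping records above are nvmf_veth_init building the virtual topology the TCP tests run on: the target lives in the nvmf_tgt_ns_spdk namespace and is reached from the host over veth pairs joined by the nvmf_br bridge. A condensed, hand-written sketch of the single-initiator half of that setup, reusing the interface names and addresses from this log (the harness additionally creates the nvmf_init_if2/nvmf_tgt_if2 pair on 10.0.0.2/10.0.0.4 and tags its iptables rules with SPDK_NVMF comments):

  ip netns add nvmf_tgt_ns_spdk                                # target gets its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up      # bridge joins the host-side veth ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.3                                           # host -> target check, as above
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> host check, as above
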
00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:55.518 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:46:55.776 [2024-12-09 05:37:42.490954] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:55.776 [2024-12-09 05:37:42.491034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:55.776 [2024-12-09 05:37:42.645644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:55.776 [2024-12-09 05:37:42.684455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:55.776 [2024-12-09 05:37:42.684515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:55.776 [2024-12-09 05:37:42.684529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:55.776 [2024-12-09 05:37:42.684539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:55.776 [2024-12-09 05:37:42.684548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:55.776 [2024-12-09 05:37:42.684943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:55.776 [2024-12-09 05:37:42.722264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:56.033 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:56.033 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:46:56.033 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:56.033 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:46:56.034 [2024-12-09 05:37:42.827655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:46:56.034 Malloc0 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:46:56.034 [2024-12-09 05:37:42.863657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73011 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73012 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73013 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:46:56.034 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73011 00:46:56.291 [2024-12-09 05:37:43.062117] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:46:56.291 [2024-12-09 05:37:43.062349] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:46:56.291 [2024-12-09 05:37:43.062473] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:46:57.226 Initializing NVMe Controllers 00:46:57.226 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:46:57.226 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:46:57.226 Initialization complete. Launching workers. 00:46:57.226 ======================================================== 00:46:57.226 Latency(us) 00:46:57.226 Device Information : IOPS MiB/s Average min max 00:46:57.226 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3716.00 14.52 268.80 196.62 620.75 00:46:57.226 ======================================================== 00:46:57.226 Total : 3716.00 14.52 268.80 196.62 620.75 00:46:57.226 00:46:57.226 Initializing NVMe Controllers 00:46:57.226 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:46:57.226 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:46:57.226 Initialization complete. Launching workers. 00:46:57.226 ======================================================== 00:46:57.226 Latency(us) 00:46:57.226 Device Information : IOPS MiB/s Average min max 00:46:57.226 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3704.99 14.47 269.62 226.11 1662.12 00:46:57.226 ======================================================== 00:46:57.226 Total : 3704.99 14.47 269.62 226.11 1662.12 00:46:57.226 00:46:57.226 Initializing NVMe Controllers 00:46:57.226 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:46:57.226 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:46:57.226 Initialization complete. Launching workers. 
00:46:57.226 ======================================================== 00:46:57.226 Latency(us) 00:46:57.226 Device Information : IOPS MiB/s Average min max 00:46:57.226 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3714.99 14.51 268.86 145.94 483.96 00:46:57.226 ======================================================== 00:46:57.226 Total : 3714.99 14.51 268.86 145.94 483.96 00:46:57.226 00:46:57.226 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73012 00:46:57.226 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73013 00:46:57.226 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:46:57.226 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:46:57.226 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:57.226 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:46:57.226 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:57.226 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:46:57.226 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:57.226 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:57.226 rmmod nvme_tcp 00:46:57.485 rmmod nvme_fabrics 00:46:57.486 rmmod nvme_keyring 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72981 ']' 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72981 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 72981 ']' 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 72981 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72981 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:57.486 killing process with pid 72981 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72981' 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 72981 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 72981 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:57.486 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:46:57.745 00:46:57.745 real 0m2.914s 00:46:57.745 user 0m4.952s 00:46:57.745 
sys 0m1.292s 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:57.745 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:46:57.745 ************************************ 00:46:57.745 END TEST nvmf_control_msg_list 00:46:57.745 ************************************ 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:46:58.005 ************************************ 00:46:58.005 START TEST nvmf_wait_for_buf 00:46:58.005 ************************************ 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:46:58.005 * Looking for test storage... 00:46:58.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:46:58.005 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:46:58.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:58.006 --rc genhtml_branch_coverage=1 00:46:58.006 --rc genhtml_function_coverage=1 00:46:58.006 --rc genhtml_legend=1 00:46:58.006 --rc geninfo_all_blocks=1 00:46:58.006 --rc geninfo_unexecuted_blocks=1 00:46:58.006 00:46:58.006 ' 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:46:58.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:58.006 --rc genhtml_branch_coverage=1 00:46:58.006 --rc genhtml_function_coverage=1 00:46:58.006 --rc genhtml_legend=1 00:46:58.006 --rc geninfo_all_blocks=1 00:46:58.006 --rc geninfo_unexecuted_blocks=1 00:46:58.006 00:46:58.006 ' 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:46:58.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:58.006 --rc genhtml_branch_coverage=1 00:46:58.006 --rc genhtml_function_coverage=1 00:46:58.006 --rc genhtml_legend=1 00:46:58.006 --rc geninfo_all_blocks=1 00:46:58.006 --rc geninfo_unexecuted_blocks=1 00:46:58.006 00:46:58.006 ' 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:46:58.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:58.006 --rc genhtml_branch_coverage=1 00:46:58.006 --rc genhtml_function_coverage=1 00:46:58.006 --rc genhtml_legend=1 00:46:58.006 --rc geninfo_all_blocks=1 00:46:58.006 --rc geninfo_unexecuted_blocks=1 00:46:58.006 00:46:58.006 ' 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:58.006 05:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:58.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:58.006 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
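The "[: : integer expression expected" complaint repeated in this trace comes from the '[' '' -eq 1 ']' test at nvmf/common.sh line 33: build_nvmf_app_args hands an empty expansion to a numeric comparison, which test(1) rejects (the run carries on because the branch simply is not taken). A minimal sketch of the usual guards for that pattern, with the placeholder name maybe_flag standing in for whichever variable common.sh actually checks:

    # [ "$maybe_flag" -eq 1 ]   # fails with "integer expression expected" when the value is empty
    [ "${maybe_flag:-0}" -eq 1 ] && echo enabled                      # default an empty value to 0
    [[ -n "$maybe_flag" && "$maybe_flag" -eq 1 ]] && echo enabled     # or require non-empty first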
00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:58.007 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:58.266 Cannot find device "nvmf_init_br" 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:58.266 Cannot find device "nvmf_init_br2" 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:46:58.266 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:58.266 Cannot find device "nvmf_tgt_br" 00:46:58.266 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:46:58.266 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:58.266 Cannot find device "nvmf_tgt_br2" 00:46:58.266 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:46:58.266 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:58.266 Cannot find device "nvmf_init_br" 00:46:58.266 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:46:58.266 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:58.266 Cannot find device "nvmf_init_br2" 00:46:58.266 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:46:58.266 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:58.267 Cannot find device "nvmf_tgt_br" 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:58.267 Cannot find device "nvmf_tgt_br2" 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:58.267 Cannot find device "nvmf_br" 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:58.267 Cannot find device "nvmf_init_if" 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:58.267 Cannot find device "nvmf_init_if2" 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:58.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:58.267 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:58.267 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:46:58.526 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:46:58.527 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:58.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:46:58.527 00:46:58.527 --- 10.0.0.3 ping statistics --- 00:46:58.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:58.527 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:46:58.527 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:46:58.527 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:46:58.527 00:46:58.527 --- 10.0.0.4 ping statistics --- 00:46:58.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:58.527 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:58.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:58.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:46:58.527 00:46:58.527 --- 10.0.0.1 ping statistics --- 00:46:58.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:58.527 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:46:58.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:58.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:46:58.527 00:46:58.527 --- 10.0.0.2 ping statistics --- 00:46:58.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:58.527 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73244 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73244 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73244 ']' 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:58.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:58.527 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:58.527 [2024-12-09 05:37:45.442273] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
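Before the target comes up, nvmf_veth_init (traced above) builds the whole test network in software: a namespace for the target side, veth pairs whose bridge-facing ends are enslaved to nvmf_br, 10.0.0.x/24 addresses split between the host and the namespace, iptables ACCEPT rules for port 4420, and a ping in each direction to prove the path. A condensed sketch of that topology using only commands that appear in the trace (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is set up the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                   # host reaches the target address across the bridge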
00:46:58.527 [2024-12-09 05:37:45.442358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:58.786 [2024-12-09 05:37:45.590435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:58.786 [2024-12-09 05:37:45.618029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:58.786 [2024-12-09 05:37:45.618096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:58.786 [2024-12-09 05:37:45.618122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:58.786 [2024-12-09 05:37:45.618129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:58.786 [2024-12-09 05:37:45.618136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:58.786 [2024-12-09 05:37:45.618473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:58.786 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:59.045 05:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:59.045 [2024-12-09 05:37:45.780032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:59.045 Malloc0 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:59.045 [2024-12-09 05:37:45.824343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:46:59.045 [2024-12-09 05:37:45.848424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:59.045 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:46:59.303 [2024-12-09 05:37:46.058718] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:47:00.679 Initializing NVMe Controllers 00:47:00.679 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:47:00.679 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:47:00.679 Initialization complete. Launching workers. 00:47:00.679 ======================================================== 00:47:00.679 Latency(us) 00:47:00.679 Device Information : IOPS MiB/s Average min max 00:47:00.679 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 498.52 62.32 8024.34 6138.12 11049.04 00:47:00.679 ======================================================== 00:47:00.679 Total : 498.52 62.32 8024.34 6138.12 11049.04 00:47:00.679 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:00.679 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:00.680 rmmod nvme_tcp 00:47:00.680 rmmod nvme_fabrics 00:47:00.680 rmmod nvme_keyring 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73244 ']' 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73244 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73244 ']' 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73244 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73244 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:00.680 killing process with pid 73244 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73244' 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73244 00:47:00.680 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73244 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:00.939 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:01.198 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:01.198 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:01.198 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:01.198 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:01.198 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:01.198 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:01.198 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:47:01.198 00:47:01.198 real 0m3.254s 00:47:01.198 user 0m2.677s 00:47:01.198 sys 0m0.718s 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:01.198 ************************************ 00:47:01.198 END TEST nvmf_wait_for_buf 00:47:01.198 ************************************ 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # '[' 0 -eq 1 ']' 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # [[ virt == phy ]] 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@70 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:01.198 ************************************ 00:47:01.198 START TEST nvmf_nsid 00:47:01.198 ************************************ 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:47:01.198 * Looking for test storage... 
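That closes TEST nvmf_wait_for_buf. In outline: the target was started with --wait-for-rpc so the iobuf small pool could be capped at 154 buffers before framework_start_init, the TCP transport was created with a deliberately tiny shared-buffer budget (-n 24 -b 24), and a one-second spdk_nvme_perf run then had to retry buffer allocations; the test only asserts that the nvmf_TCP small_pool.retry counter ends up non-zero (4750 above). A sketch of the same sequence issued through scripts/rpc.py rather than the harness's rpc_cmd wrapper, against the default /var/tmp/spdk.sock:

    rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc.py framework_start_init
    rpc.py bdev_malloc_create -b Malloc0 32 512
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'    # must be > 0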
00:47:01.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:47:01.198 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:01.459 --rc genhtml_branch_coverage=1 00:47:01.459 --rc genhtml_function_coverage=1 00:47:01.459 --rc genhtml_legend=1 00:47:01.459 --rc geninfo_all_blocks=1 00:47:01.459 --rc geninfo_unexecuted_blocks=1 00:47:01.459 00:47:01.459 ' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:01.459 --rc genhtml_branch_coverage=1 00:47:01.459 --rc genhtml_function_coverage=1 00:47:01.459 --rc genhtml_legend=1 00:47:01.459 --rc geninfo_all_blocks=1 00:47:01.459 --rc geninfo_unexecuted_blocks=1 00:47:01.459 00:47:01.459 ' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:01.459 --rc genhtml_branch_coverage=1 00:47:01.459 --rc genhtml_function_coverage=1 00:47:01.459 --rc genhtml_legend=1 00:47:01.459 --rc geninfo_all_blocks=1 00:47:01.459 --rc geninfo_unexecuted_blocks=1 00:47:01.459 00:47:01.459 ' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:01.459 --rc genhtml_branch_coverage=1 00:47:01.459 --rc genhtml_function_coverage=1 00:47:01.459 --rc genhtml_legend=1 00:47:01.459 --rc geninfo_all_blocks=1 00:47:01.459 --rc geninfo_unexecuted_blocks=1 00:47:01.459 00:47:01.459 ' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
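The lt 1.15 2 trace repeated here is scripts/common.sh checking whether the installed lcov predates 2.0, which decides the --rc lcov_branch_coverage/lcov_function_coverage spelling exported just below. A minimal sketch of that element-wise comparison under the same split-on-dots, numeric-fields assumption the traced cmp_versions makes (the helper name ver_lt is made up for the sketch):

    # Succeeds (returns 0) when $1 is strictly older than $2, e.g. ver_lt 1.15 2.
    ver_lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                  # equal versions are not "less than"
    }

    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use --rc lcov_* spellings"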
00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:01.459 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:47:01.459 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:01.460 Cannot find device "nvmf_init_br" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:01.460 Cannot find device "nvmf_init_br2" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:01.460 Cannot find device "nvmf_tgt_br" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:01.460 Cannot find device "nvmf_tgt_br2" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:01.460 Cannot find device "nvmf_init_br" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:01.460 Cannot find device "nvmf_init_br2" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:01.460 Cannot find device "nvmf_tgt_br" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:01.460 Cannot find device "nvmf_tgt_br2" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:01.460 Cannot find device "nvmf_br" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:01.460 Cannot find device "nvmf_init_if" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:01.460 Cannot find device "nvmf_init_if2" 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:01.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:47:01.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:01.460 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
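As in the wait_for_buf run, nvmf_veth_init starts by tearing down whatever a previous test left behind, so every "Cannot find device ..." and "Cannot open network namespace ..." line above is a cleanup command failing harmlessly on a fresh slate before the same veth/bridge topology is rebuilt. The tolerate-then-rebuild idea, sketched with the interface names from the trace (the explicit || true form is an illustration of the pattern, not the harness's exact wrapper):

    # Best-effort teardown: each object may simply not exist yet.
    for end in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$end" nomaster 2>/dev/null || true
        ip link set "$end" down     2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if  2>/dev/null || true
    ip link delete nvmf_init_if2 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
    # ...then recreate the namespace, veth pairs, addresses and bridge from scratch.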
00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:01.720 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:01.720 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:47:01.720 00:47:01.720 --- 10.0.0.3 ping statistics --- 00:47:01.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:01.720 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:01.720 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:01.720 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:47:01.720 00:47:01.720 --- 10.0.0.4 ping statistics --- 00:47:01.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:01.720 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:01.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:01.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:47:01.720 00:47:01.720 --- 10.0.0.1 ping statistics --- 00:47:01.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:01.720 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:01.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:01.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:47:01.720 00:47:01.720 --- 10.0.0.2 ping statistics --- 00:47:01.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:01.720 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:01.720 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73508 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73508 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73508 ']' 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:01.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:01.980 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:47:01.980 [2024-12-09 05:37:48.758125] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
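With the network in place, nvmfappstart launches the first target: NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk" so nvmf_tgt runs inside the target namespace, and waitforlisten then blocks until the application is reachable over its JSON-RPC socket. The following is a minimal sketch of that launch-and-wait pattern; the retry budget and sleep interval are illustrative assumptions rather than the exact values used by autotest_common.sh.

    # Start the target inside the namespace and wait for /var/tmp/spdk.sock to appear.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do
        # the RPC socket shows up once SPDK initialization has finished
        if [ -S /var/tmp/spdk.sock ]; then
            break
        fi
        sleep 0.1
    done

The real helper additionally verifies that the process is still alive and that the RPC endpoint actually answers, not merely that the socket file exists; the loop above only checks for the socket.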
00:47:01.980 [2024-12-09 05:37:48.758218] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:01.980 [2024-12-09 05:37:48.903305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:01.980 [2024-12-09 05:37:48.931183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:01.980 [2024-12-09 05:37:48.931248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:01.980 [2024-12-09 05:37:48.931257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:01.980 [2024-12-09 05:37:48.931264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:01.980 [2024-12-09 05:37:48.931270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:01.980 [2024-12-09 05:37:48.931539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:02.239 [2024-12-09 05:37:48.963508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:02.239 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:02.239 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73533 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3d3c2c30-d9b2-41e6-bd95-b3d17f77f1c3 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3ddd72d7-1a31-48b9-b598-35552d653f62 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=14083fce-6966-48eb-baae-aa2550353eba 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:47:02.240 null0 00:47:02.240 null1 00:47:02.240 null2 00:47:02.240 [2024-12-09 05:37:49.119349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:02.240 [2024-12-09 05:37:49.135235] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:47:02.240 [2024-12-09 05:37:49.135326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73533 ] 00:47:02.240 [2024-12-09 05:37:49.143452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73533 /var/tmp/tgt2.sock 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73533 ']' 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:02.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
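The point of the nsid test is that each of the three namespaces is created with an explicit UUID (the three uuidgen calls above) and the initiator must then report a Namespace Globally Unique Identifier that is simply that UUID with the dashes stripped. The target configuration itself is issued through rpc_cmd against the second target's /var/tmp/tgt2.sock socket and is not expanded in the trace; the verification, which the trace does show a little further down as uuid2nguid and nvme_get_nguid, boils down to the sketch below for namespace 1. The UUID value is the one printed above, and nvme-cli plus jq are assumed to be installed and run with sufficient privileges.

    # Check that the NGUID exposed for nsid 1 equals the namespace UUID without dashes.
    ns1uuid=3d3c2c30-d9b2-41e6-bd95-b3d17f77f1c3           # from the uuidgen call above
    expected=$(printf '%s' "$ns1uuid" | tr -d -)            # uuid2nguid: strip the dashes
    reported=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    if [ "${reported^^}" = "${expected^^}" ]; then          # compare case-insensitively
        echo "nsid 1: NGUID ${reported^^} matches UUID $ns1uuid"
    fi

The same comparison is repeated for nvme0n2 and nvme0n3 against ns2uuid and ns3uuid before the controller is disconnected and both targets are killed.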
00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:02.240 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:47:02.499 [2024-12-09 05:37:49.293461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:02.499 [2024-12-09 05:37:49.331956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:02.499 [2024-12-09 05:37:49.377093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:02.758 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:02.758 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:47:02.759 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:47:03.017 [2024-12-09 05:37:49.944894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:03.017 [2024-12-09 05:37:49.961027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:47:03.279 nvme0n1 nvme0n2 00:47:03.280 nvme1n1 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:47:03.280 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:47:04.215 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:47:04.215 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:47:04.215 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:47:04.215 05:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:47:04.215 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:47:04.215 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3d3c2c30-d9b2-41e6-bd95-b3d17f77f1c3 00:47:04.215 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3d3c2c30d9b241e6bd95b3d17f77f1c3 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3D3C2C30D9B241E6BD95B3D17F77F1C3 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3D3C2C30D9B241E6BD95B3D17F77F1C3 == \3\D\3\C\2\C\3\0\D\9\B\2\4\1\E\6\B\D\9\5\B\3\D\1\7\F\7\7\F\1\C\3 ]] 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3ddd72d7-1a31-48b9-b598-35552d653f62 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3ddd72d71a3148b9b59835552d653f62 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3DDD72D71A3148B9B59835552D653F62 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3DDD72D71A3148B9B59835552D653F62 == \3\D\D\D\7\2\D\7\1\A\3\1\4\8\B\9\B\5\9\8\3\5\5\5\2\D\6\5\3\F\6\2 ]] 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:47:04.472 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:47:04.473 05:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 14083fce-6966-48eb-baae-aa2550353eba 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=14083fce696648ebbaaeaa2550353eba 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 14083FCE696648EBBAAEAA2550353EBA 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 14083FCE696648EBBAAEAA2550353EBA == \1\4\0\8\3\F\C\E\6\9\6\6\4\8\E\B\B\A\A\E\A\A\2\5\5\0\3\5\3\E\B\A ]] 00:47:04.473 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73533 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73533 ']' 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73533 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73533 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73533' 00:47:04.731 killing process with pid 73533 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73533 00:47:04.731 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73533 00:47:04.990 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:47:04.990 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:04.990 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:47:04.990 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:47:04.990 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:47:04.990 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:04.990 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:04.990 rmmod nvme_tcp 00:47:05.249 rmmod nvme_fabrics 00:47:05.249 rmmod nvme_keyring 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73508 ']' 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73508 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73508 ']' 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73508 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73508 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:05.249 killing process with pid 73508 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73508' 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73508 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73508 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:05.249 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:47:05.507 00:47:05.507 real 0m4.393s 00:47:05.507 user 0m6.587s 00:47:05.507 sys 0m1.529s 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:05.507 ************************************ 00:47:05.507 END TEST nvmf_nsid 00:47:05.507 ************************************ 00:47:05.507 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:47:05.766 05:37:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@72 -- # trap - SIGINT SIGTERM EXIT 00:47:05.766 00:47:05.766 real 5m3.534s 00:47:05.766 user 10m41.674s 00:47:05.766 sys 1m4.492s 00:47:05.766 05:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:05.766 05:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:05.766 ************************************ 00:47:05.766 END TEST nvmf_target_extra 00:47:05.766 ************************************ 00:47:05.766 05:37:52 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:47:05.766 05:37:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:05.766 05:37:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:05.766 05:37:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:05.766 ************************************ 00:47:05.766 START TEST nvmf_host 00:47:05.766 ************************************ 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:47:05.766 * Looking for test storage... 
00:47:05.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:47:05.766 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:06.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:06.035 --rc genhtml_branch_coverage=1 00:47:06.035 --rc genhtml_function_coverage=1 00:47:06.035 --rc genhtml_legend=1 00:47:06.035 --rc geninfo_all_blocks=1 00:47:06.035 --rc geninfo_unexecuted_blocks=1 00:47:06.035 00:47:06.035 ' 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:06.035 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:47:06.035 --rc genhtml_branch_coverage=1 00:47:06.035 --rc genhtml_function_coverage=1 00:47:06.035 --rc genhtml_legend=1 00:47:06.035 --rc geninfo_all_blocks=1 00:47:06.035 --rc geninfo_unexecuted_blocks=1 00:47:06.035 00:47:06.035 ' 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:06.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:06.035 --rc genhtml_branch_coverage=1 00:47:06.035 --rc genhtml_function_coverage=1 00:47:06.035 --rc genhtml_legend=1 00:47:06.035 --rc geninfo_all_blocks=1 00:47:06.035 --rc geninfo_unexecuted_blocks=1 00:47:06.035 00:47:06.035 ' 00:47:06.035 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:06.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:06.035 --rc genhtml_branch_coverage=1 00:47:06.036 --rc genhtml_function_coverage=1 00:47:06.036 --rc genhtml_legend=1 00:47:06.036 --rc geninfo_all_blocks=1 00:47:06.036 --rc geninfo_unexecuted_blocks=1 00:47:06.036 00:47:06.036 ' 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:06.036 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:47:06.036 
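One genuine shell error shows up each time nvmf/common.sh is sourced: build_nvmf_app_args evaluates '[' '' -eq 1 ']' at line 33, and test(1) rejects the empty string as an operand of -eq, so bash prints "[: : integer expression expected" and the branch is skipped while the run continues. The trace does not reveal which configuration variable is empty at that point, but the usual defensive pattern is to give such a flag a numeric default before the comparison; the variable name below is invented purely for illustration.

    # Guard a numeric test against an unset or empty flag by defaulting it to 0.
    some_flag=${SOME_TEST_FLAG:-0}       # hypothetical variable name
    if [ "$some_flag" -eq 1 ]; then
        echo "flag enabled"
    fi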
05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:47:06.036 ************************************ 00:47:06.036 START TEST nvmf_identify 00:47:06.036 ************************************ 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:47:06.036 * Looking for test storage... 00:47:06.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:06.036 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:06.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:06.037 --rc genhtml_branch_coverage=1 00:47:06.037 --rc genhtml_function_coverage=1 00:47:06.037 --rc genhtml_legend=1 00:47:06.037 --rc geninfo_all_blocks=1 00:47:06.037 --rc geninfo_unexecuted_blocks=1 00:47:06.037 00:47:06.037 ' 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:06.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:06.037 --rc genhtml_branch_coverage=1 00:47:06.037 --rc genhtml_function_coverage=1 00:47:06.037 --rc genhtml_legend=1 00:47:06.037 --rc geninfo_all_blocks=1 00:47:06.037 --rc geninfo_unexecuted_blocks=1 00:47:06.037 00:47:06.037 ' 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:06.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:06.037 --rc genhtml_branch_coverage=1 00:47:06.037 --rc genhtml_function_coverage=1 00:47:06.037 --rc genhtml_legend=1 00:47:06.037 --rc geninfo_all_blocks=1 00:47:06.037 --rc geninfo_unexecuted_blocks=1 00:47:06.037 00:47:06.037 ' 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:06.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:06.037 --rc genhtml_branch_coverage=1 00:47:06.037 --rc genhtml_function_coverage=1 00:47:06.037 --rc genhtml_legend=1 00:47:06.037 --rc geninfo_all_blocks=1 00:47:06.037 --rc geninfo_unexecuted_blocks=1 00:47:06.037 00:47:06.037 ' 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:06.037 
05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:06.037 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:06.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:06.038 05:37:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:06.038 05:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:06.038 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:06.038 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:06.038 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:06.038 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:06.038 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:06.038 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:06.038 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:06.296 Cannot find device "nvmf_init_br" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:06.296 Cannot find device "nvmf_init_br2" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:06.296 Cannot find device "nvmf_tgt_br" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:47:06.296 Cannot find device "nvmf_tgt_br2" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:06.296 Cannot find device "nvmf_init_br" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:06.296 Cannot find device "nvmf_init_br2" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:06.296 Cannot find device "nvmf_tgt_br" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:06.296 Cannot find device "nvmf_tgt_br2" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:06.296 Cannot find device "nvmf_br" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:06.296 Cannot find device "nvmf_init_if" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:06.296 Cannot find device "nvmf_init_if2" 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:47:06.296 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:06.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:06.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:06.297 
05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:06.297 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:06.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:47:06.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:47:06.555 00:47:06.555 --- 10.0.0.3 ping statistics --- 00:47:06.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:06.555 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:06.555 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:06.555 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:47:06.555 00:47:06.555 --- 10.0.0.4 ping statistics --- 00:47:06.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:06.555 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:06.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:06.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:47:06.555 00:47:06.555 --- 10.0.0.1 ping statistics --- 00:47:06.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:06.555 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:06.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:06.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:47:06.555 00:47:06.555 --- 10.0.0.2 ping statistics --- 00:47:06.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:06.555 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73884 00:47:06.555 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:06.556 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:06.556 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73884 00:47:06.556 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73884 ']' 00:47:06.556 
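The nvmf_veth_init trace above boils down to a small veth-plus-bridge topology with the target side isolated in its own network namespace, firewall openings for the NVMe/TCP port, and a ping sweep to prove connectivity in both directions. A condensed sketch of the same sequence (commands and names are taken from the trace; run as root; the harness actually issues the iptables rules through its ipts wrapper so it can remove them again at teardown):

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if ends carry IP addresses, the *_br ends plug into the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator gets 10.0.0.1/.2, target (inside the namespace) gets 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring the links up on both sides of the namespace boundary
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side *_br peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # open NVMe/TCP port 4420 on the initiator interfaces and allow forwarding on the bridge
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # connectivity check in both directions, as in the ping output above
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2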
05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:06.556 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:06.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:06.556 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:06.556 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:06.556 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:06.556 [2024-12-09 05:37:53.440604] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:47:06.556 [2024-12-09 05:37:53.440692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:06.813 [2024-12-09 05:37:53.597833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:06.813 [2024-12-09 05:37:53.640180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:06.813 [2024-12-09 05:37:53.640242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:06.813 [2024-12-09 05:37:53.640256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:06.813 [2024-12-09 05:37:53.640267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:06.813 [2024-12-09 05:37:53.640276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
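With the namespace reachable, the target application is launched inside it and the harness waits for its RPC socket. A minimal sketch of that step (path and flags are the ones in the trace: -i 0 picks shared-memory id 0, -e 0xFFFF enables the tracepoint group mask noted above, -m 0xF yields the four reactors on cores 0-3; waitforlisten is the autotest_common.sh helper that polls /var/tmp/spdk.sock, and the real trap additionally runs process_shm and nvmftestfini):

  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)

  # start the NVMe-oF target inside the target namespace
  "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # simplified cleanup; the harness trap also collects shared memory and runs nvmftestfini
  trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

  # block until the app is up and listening on the UNIX domain socket /var/tmp/spdk.sock
  waitforlisten "$nvmfpid"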
00:47:06.813 [2024-12-09 05:37:53.641252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:06.813 [2024-12-09 05:37:53.641402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:06.813 [2024-12-09 05:37:53.641522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:06.813 [2024-12-09 05:37:53.641525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:06.813 [2024-12-09 05:37:53.680258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:06.813 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:06.813 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:47:06.813 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:06.813 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.813 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:06.813 [2024-12-09 05:37:53.746831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:06.813 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.813 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:47:06.813 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:06.813 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:07.071 Malloc0 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.071 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:07.071 [2024-12-09 05:37:53.852759] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:07.072 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.072 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:47:07.072 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.072 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:07.072 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.072 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:47:07.072 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.072 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:07.072 [ 00:47:07.072 { 00:47:07.072 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:47:07.072 "subtype": "Discovery", 00:47:07.072 "listen_addresses": [ 00:47:07.072 { 00:47:07.072 "trtype": "TCP", 00:47:07.072 "adrfam": "IPv4", 00:47:07.072 "traddr": "10.0.0.3", 00:47:07.072 "trsvcid": "4420" 00:47:07.072 } 00:47:07.072 ], 00:47:07.072 "allow_any_host": true, 00:47:07.072 "hosts": [] 00:47:07.072 }, 00:47:07.072 { 00:47:07.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:07.072 "subtype": "NVMe", 00:47:07.072 "listen_addresses": [ 00:47:07.072 { 00:47:07.072 "trtype": "TCP", 00:47:07.072 "adrfam": "IPv4", 00:47:07.072 "traddr": "10.0.0.3", 00:47:07.072 "trsvcid": "4420" 00:47:07.072 } 00:47:07.072 ], 00:47:07.072 "allow_any_host": true, 00:47:07.072 "hosts": [], 00:47:07.072 "serial_number": "SPDK00000000000001", 00:47:07.072 "model_number": "SPDK bdev Controller", 00:47:07.072 "max_namespaces": 32, 00:47:07.072 "min_cntlid": 1, 00:47:07.072 "max_cntlid": 65519, 00:47:07.072 "namespaces": [ 00:47:07.072 { 00:47:07.072 "nsid": 1, 00:47:07.072 "bdev_name": "Malloc0", 00:47:07.072 "name": "Malloc0", 00:47:07.072 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:47:07.072 "eui64": "ABCDEF0123456789", 00:47:07.072 "uuid": "39588ada-e178-4b1a-a256-f069028d2189" 00:47:07.072 } 00:47:07.072 ] 00:47:07.072 } 00:47:07.072 ] 00:47:07.072 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.072 05:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:47:07.072 [2024-12-09 05:37:53.910255] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
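The subsystem the identify tool is about to query was assembled entirely over JSON-RPC against the target started above, and the nvmf_get_subsystems JSON just printed is the result. A condensed sketch of the same calls using scripts/rpc.py (the trace drives them through the harness rpc_cmd wrapper; the rpc.py path is assumed from the repo layout, while every argument value is taken from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location; rpc_cmd issues the same RPCs

  # TCP transport with the harness options '-t tcp -o -u 8192' (-u sets an 8 KiB I/O unit size)
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB malloc-backed bdev with 512-byte blocks to serve as the namespace
  $rpc bdev_malloc_create 64 512 -b Malloc0

  # NVM subsystem, namespace identifiers, and listeners on the namespaced 10.0.0.3:4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

  # dump the resulting configuration (the JSON shown above)
  $rpc nvmf_get_subsystems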
00:47:07.072 [2024-12-09 05:37:53.910310] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73906 ] 00:47:07.333 [2024-12-09 05:37:54.077814] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:47:07.333 [2024-12-09 05:37:54.077876] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:47:07.333 [2024-12-09 05:37:54.077883] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:47:07.333 [2024-12-09 05:37:54.077914] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:47:07.333 [2024-12-09 05:37:54.077939] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:47:07.333 [2024-12-09 05:37:54.078290] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:47:07.333 [2024-12-09 05:37:54.078355] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe4f750 0 00:47:07.333 [2024-12-09 05:37:54.083660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:47:07.333 [2024-12-09 05:37:54.083688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:47:07.333 [2024-12-09 05:37:54.083695] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:47:07.333 [2024-12-09 05:37:54.083699] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:47:07.333 [2024-12-09 05:37:54.083735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.083744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.083749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe4f750) 00:47:07.333 [2024-12-09 05:37:54.083762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:47:07.333 [2024-12-09 05:37:54.083795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3740, cid 0, qid 0 00:47:07.333 [2024-12-09 05:37:54.090683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.333 [2024-12-09 05:37:54.090705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.333 [2024-12-09 05:37:54.090711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.090716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3740) on tqpair=0xe4f750 00:47:07.333 [2024-12-09 05:37:54.090727] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:47:07.333 [2024-12-09 05:37:54.090735] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:47:07.333 [2024-12-09 05:37:54.090742] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:47:07.333 [2024-12-09 05:37:54.090759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.090765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:47:07.333 [2024-12-09 05:37:54.090769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe4f750) 00:47:07.333 [2024-12-09 05:37:54.090779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.333 [2024-12-09 05:37:54.090806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3740, cid 0, qid 0 00:47:07.333 [2024-12-09 05:37:54.090873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.333 [2024-12-09 05:37:54.090881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.333 [2024-12-09 05:37:54.090885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.090889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3740) on tqpair=0xe4f750 00:47:07.333 [2024-12-09 05:37:54.090910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:47:07.333 [2024-12-09 05:37:54.090919] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:47:07.333 [2024-12-09 05:37:54.090927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.090932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.090950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe4f750) 00:47:07.333 [2024-12-09 05:37:54.090958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.333 [2024-12-09 05:37:54.090979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3740, cid 0, qid 0 00:47:07.333 [2024-12-09 05:37:54.091022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.333 [2024-12-09 05:37:54.091030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.333 [2024-12-09 05:37:54.091033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.091037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3740) on tqpair=0xe4f750 00:47:07.333 [2024-12-09 05:37:54.091043] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:47:07.333 [2024-12-09 05:37:54.091052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:47:07.333 [2024-12-09 05:37:54.091060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.091064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.091068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe4f750) 00:47:07.333 [2024-12-09 05:37:54.091076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.333 [2024-12-09 05:37:54.091094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3740, cid 0, qid 0 00:47:07.333 [2024-12-09 05:37:54.091144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.333 [2024-12-09 05:37:54.091151] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.333 [2024-12-09 05:37:54.091155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.091159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3740) on tqpair=0xe4f750 00:47:07.333 [2024-12-09 05:37:54.091165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:47:07.333 [2024-12-09 05:37:54.091176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.091181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.091184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe4f750) 00:47:07.333 [2024-12-09 05:37:54.091192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.333 [2024-12-09 05:37:54.091210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3740, cid 0, qid 0 00:47:07.333 [2024-12-09 05:37:54.091252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.333 [2024-12-09 05:37:54.091258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.333 [2024-12-09 05:37:54.091262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.091266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3740) on tqpair=0xe4f750 00:47:07.333 [2024-12-09 05:37:54.091271] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:47:07.333 [2024-12-09 05:37:54.091277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:47:07.333 [2024-12-09 05:37:54.091285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:47:07.333 [2024-12-09 05:37:54.091391] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:47:07.333 [2024-12-09 05:37:54.091396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:47:07.333 [2024-12-09 05:37:54.091405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.091410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.333 [2024-12-09 05:37:54.091414] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe4f750) 00:47:07.333 [2024-12-09 05:37:54.091421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.333 [2024-12-09 05:37:54.091440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3740, cid 0, qid 0 00:47:07.333 [2024-12-09 05:37:54.091485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.334 [2024-12-09 05:37:54.091497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.334 [2024-12-09 05:37:54.091502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:47:07.334 [2024-12-09 05:37:54.091506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3740) on tqpair=0xe4f750 00:47:07.334 [2024-12-09 05:37:54.091512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:47:07.334 [2024-12-09 05:37:54.091523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.091528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.091532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe4f750) 00:47:07.334 [2024-12-09 05:37:54.091540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.334 [2024-12-09 05:37:54.091585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3740, cid 0, qid 0 00:47:07.334 [2024-12-09 05:37:54.091629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.334 [2024-12-09 05:37:54.091637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.334 [2024-12-09 05:37:54.091641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.091645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3740) on tqpair=0xe4f750 00:47:07.334 [2024-12-09 05:37:54.091651] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:47:07.334 [2024-12-09 05:37:54.091657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:47:07.334 [2024-12-09 05:37:54.091665] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:47:07.334 [2024-12-09 05:37:54.091676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:47:07.334 [2024-12-09 05:37:54.091687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.091692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe4f750) 00:47:07.334 [2024-12-09 05:37:54.091700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.334 [2024-12-09 05:37:54.091722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3740, cid 0, qid 0 00:47:07.334 [2024-12-09 05:37:54.091805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.334 [2024-12-09 05:37:54.091813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.334 [2024-12-09 05:37:54.091817] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.091821] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe4f750): datao=0, datal=4096, cccid=0 00:47:07.334 [2024-12-09 05:37:54.091827] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeb3740) on tqpair(0xe4f750): expected_datao=0, payload_size=4096 00:47:07.334 [2024-12-09 05:37:54.091832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:47:07.334 [2024-12-09 05:37:54.091840] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.091844] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.091853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.334 [2024-12-09 05:37:54.091860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.334 [2024-12-09 05:37:54.091863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.091868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3740) on tqpair=0xe4f750 00:47:07.334 [2024-12-09 05:37:54.091877] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:47:07.334 [2024-12-09 05:37:54.091882] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:47:07.334 [2024-12-09 05:37:54.091887] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:47:07.334 [2024-12-09 05:37:54.091897] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:47:07.334 [2024-12-09 05:37:54.091903] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:47:07.334 [2024-12-09 05:37:54.091908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:47:07.334 [2024-12-09 05:37:54.091918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:47:07.334 [2024-12-09 05:37:54.091941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.091946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.091965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe4f750) 00:47:07.334 [2024-12-09 05:37:54.091973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:47:07.334 [2024-12-09 05:37:54.091993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3740, cid 0, qid 0 00:47:07.334 [2024-12-09 05:37:54.092043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.334 [2024-12-09 05:37:54.092050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.334 [2024-12-09 05:37:54.092053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3740) on tqpair=0xe4f750 00:47:07.334 [2024-12-09 05:37:54.092066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe4f750) 00:47:07.334 [2024-12-09 05:37:54.092081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:47:07.334 [2024-12-09 05:37:54.092087] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092091] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe4f750) 00:47:07.334 [2024-12-09 05:37:54.092100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:47:07.334 [2024-12-09 05:37:54.092106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe4f750) 00:47:07.334 [2024-12-09 05:37:54.092120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:47:07.334 [2024-12-09 05:37:54.092126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.334 [2024-12-09 05:37:54.092139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:47:07.334 [2024-12-09 05:37:54.092144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:47:07.334 [2024-12-09 05:37:54.092153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:47:07.334 [2024-12-09 05:37:54.092160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe4f750) 00:47:07.334 [2024-12-09 05:37:54.092171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.334 [2024-12-09 05:37:54.092199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3740, cid 0, qid 0 00:47:07.334 [2024-12-09 05:37:54.092207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb38c0, cid 1, qid 0 00:47:07.334 [2024-12-09 05:37:54.092211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3a40, cid 2, qid 0 00:47:07.334 [2024-12-09 05:37:54.092216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.334 [2024-12-09 05:37:54.092221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3d40, cid 4, qid 0 00:47:07.334 [2024-12-09 05:37:54.092300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.334 [2024-12-09 05:37:54.092307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.334 [2024-12-09 05:37:54.092310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3d40) on tqpair=0xe4f750 00:47:07.334 [2024-12-09 05:37:54.092320] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:47:07.334 [2024-12-09 05:37:54.092325] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:47:07.334 [2024-12-09 05:37:54.092337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe4f750) 00:47:07.334 [2024-12-09 05:37:54.092349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.334 [2024-12-09 05:37:54.092368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3d40, cid 4, qid 0 00:47:07.334 [2024-12-09 05:37:54.092420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.334 [2024-12-09 05:37:54.092427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.334 [2024-12-09 05:37:54.092431] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092435] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe4f750): datao=0, datal=4096, cccid=4 00:47:07.334 [2024-12-09 05:37:54.092440] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeb3d40) on tqpair(0xe4f750): expected_datao=0, payload_size=4096 00:47:07.334 [2024-12-09 05:37:54.092444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092451] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092455] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.334 [2024-12-09 05:37:54.092470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.334 [2024-12-09 05:37:54.092474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.334 [2024-12-09 05:37:54.092478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3d40) on tqpair=0xe4f750 00:47:07.334 [2024-12-09 05:37:54.092491] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:47:07.334 [2024-12-09 05:37:54.092516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe4f750) 00:47:07.335 [2024-12-09 05:37:54.092529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.335 [2024-12-09 05:37:54.092537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe4f750) 00:47:07.335 [2024-12-09 05:37:54.092551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:47:07.335 [2024-12-09 05:37:54.092591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xeb3d40, cid 4, qid 0 00:47:07.335 [2024-12-09 05:37:54.092629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3ec0, cid 5, qid 0 00:47:07.335 [2024-12-09 05:37:54.092717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.335 [2024-12-09 05:37:54.092725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.335 [2024-12-09 05:37:54.092729] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092733] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe4f750): datao=0, datal=1024, cccid=4 00:47:07.335 [2024-12-09 05:37:54.092738] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeb3d40) on tqpair(0xe4f750): expected_datao=0, payload_size=1024 00:47:07.335 [2024-12-09 05:37:54.092743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092750] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092754] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.335 [2024-12-09 05:37:54.092766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.335 [2024-12-09 05:37:54.092770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3ec0) on tqpair=0xe4f750 00:47:07.335 [2024-12-09 05:37:54.092796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.335 [2024-12-09 05:37:54.092804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.335 [2024-12-09 05:37:54.092808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3d40) on tqpair=0xe4f750 00:47:07.335 [2024-12-09 05:37:54.092825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe4f750) 00:47:07.335 [2024-12-09 05:37:54.092839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.335 [2024-12-09 05:37:54.092865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3d40, cid 4, qid 0 00:47:07.335 [2024-12-09 05:37:54.092960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.335 [2024-12-09 05:37:54.092967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.335 [2024-12-09 05:37:54.092971] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092974] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe4f750): datao=0, datal=3072, cccid=4 00:47:07.335 [2024-12-09 05:37:54.092979] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeb3d40) on tqpair(0xe4f750): expected_datao=0, payload_size=3072 00:47:07.335 [2024-12-09 05:37:54.092983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092990] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.092994] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.093002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.335 [2024-12-09 05:37:54.093008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.335 [2024-12-09 05:37:54.093028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.093032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3d40) on tqpair=0xe4f750 00:47:07.335 [2024-12-09 05:37:54.093042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.093047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe4f750) 00:47:07.335 [2024-12-09 05:37:54.093055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.335 [2024-12-09 05:37:54.093078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3d40, cid 4, qid 0 00:47:07.335 [2024-12-09 05:37:54.093140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.335 [2024-12-09 05:37:54.093147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.335 [2024-12-09 05:37:54.093151] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.093155] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe4f750): datao=0, datal=8, cccid=4 00:47:07.335 [2024-12-09 05:37:54.093160] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeb3d40) on tqpair(0xe4f750): expected_datao=0, payload_size=8 00:47:07.335 [2024-12-09 05:37:54.093164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.093171] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.093175] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.093190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.335 [2024-12-09 05:37:54.093197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.335 [2024-12-09 05:37:54.093201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.335 [2024-12-09 05:37:54.093205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3d40) on tqpair=0xe4f750 00:47:07.335 ===================================================== 00:47:07.335 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:47:07.335 ===================================================== 00:47:07.335 Controller Capabilities/Features 00:47:07.335 ================================ 00:47:07.335 Vendor ID: 0000 00:47:07.335 Subsystem Vendor ID: 0000 00:47:07.335 Serial Number: .................... 00:47:07.335 Model Number: ........................................ 
00:47:07.335 Firmware Version: 25.01 00:47:07.335 Recommended Arb Burst: 0 00:47:07.335 IEEE OUI Identifier: 00 00 00 00:47:07.335 Multi-path I/O 00:47:07.335 May have multiple subsystem ports: No 00:47:07.335 May have multiple controllers: No 00:47:07.335 Associated with SR-IOV VF: No 00:47:07.335 Max Data Transfer Size: 131072 00:47:07.335 Max Number of Namespaces: 0 00:47:07.335 Max Number of I/O Queues: 1024 00:47:07.335 NVMe Specification Version (VS): 1.3 00:47:07.335 NVMe Specification Version (Identify): 1.3 00:47:07.335 Maximum Queue Entries: 128 00:47:07.335 Contiguous Queues Required: Yes 00:47:07.335 Arbitration Mechanisms Supported 00:47:07.335 Weighted Round Robin: Not Supported 00:47:07.335 Vendor Specific: Not Supported 00:47:07.335 Reset Timeout: 15000 ms 00:47:07.335 Doorbell Stride: 4 bytes 00:47:07.335 NVM Subsystem Reset: Not Supported 00:47:07.335 Command Sets Supported 00:47:07.335 NVM Command Set: Supported 00:47:07.335 Boot Partition: Not Supported 00:47:07.335 Memory Page Size Minimum: 4096 bytes 00:47:07.335 Memory Page Size Maximum: 4096 bytes 00:47:07.335 Persistent Memory Region: Not Supported 00:47:07.335 Optional Asynchronous Events Supported 00:47:07.335 Namespace Attribute Notices: Not Supported 00:47:07.335 Firmware Activation Notices: Not Supported 00:47:07.335 ANA Change Notices: Not Supported 00:47:07.335 PLE Aggregate Log Change Notices: Not Supported 00:47:07.335 LBA Status Info Alert Notices: Not Supported 00:47:07.335 EGE Aggregate Log Change Notices: Not Supported 00:47:07.335 Normal NVM Subsystem Shutdown event: Not Supported 00:47:07.335 Zone Descriptor Change Notices: Not Supported 00:47:07.335 Discovery Log Change Notices: Supported 00:47:07.335 Controller Attributes 00:47:07.335 128-bit Host Identifier: Not Supported 00:47:07.335 Non-Operational Permissive Mode: Not Supported 00:47:07.335 NVM Sets: Not Supported 00:47:07.335 Read Recovery Levels: Not Supported 00:47:07.335 Endurance Groups: Not Supported 00:47:07.335 Predictable Latency Mode: Not Supported 00:47:07.335 Traffic Based Keep ALive: Not Supported 00:47:07.335 Namespace Granularity: Not Supported 00:47:07.335 SQ Associations: Not Supported 00:47:07.335 UUID List: Not Supported 00:47:07.335 Multi-Domain Subsystem: Not Supported 00:47:07.335 Fixed Capacity Management: Not Supported 00:47:07.335 Variable Capacity Management: Not Supported 00:47:07.335 Delete Endurance Group: Not Supported 00:47:07.335 Delete NVM Set: Not Supported 00:47:07.335 Extended LBA Formats Supported: Not Supported 00:47:07.335 Flexible Data Placement Supported: Not Supported 00:47:07.335 00:47:07.335 Controller Memory Buffer Support 00:47:07.335 ================================ 00:47:07.335 Supported: No 00:47:07.335 00:47:07.335 Persistent Memory Region Support 00:47:07.335 ================================ 00:47:07.335 Supported: No 00:47:07.335 00:47:07.335 Admin Command Set Attributes 00:47:07.335 ============================ 00:47:07.335 Security Send/Receive: Not Supported 00:47:07.335 Format NVM: Not Supported 00:47:07.335 Firmware Activate/Download: Not Supported 00:47:07.335 Namespace Management: Not Supported 00:47:07.335 Device Self-Test: Not Supported 00:47:07.335 Directives: Not Supported 00:47:07.335 NVMe-MI: Not Supported 00:47:07.335 Virtualization Management: Not Supported 00:47:07.335 Doorbell Buffer Config: Not Supported 00:47:07.335 Get LBA Status Capability: Not Supported 00:47:07.335 Command & Feature Lockdown Capability: Not Supported 00:47:07.335 Abort Command Limit: 1 00:47:07.335 Async 
Event Request Limit: 4 00:47:07.335 Number of Firmware Slots: N/A 00:47:07.335 Firmware Slot 1 Read-Only: N/A 00:47:07.335 Firmware Activation Without Reset: N/A 00:47:07.335 Multiple Update Detection Support: N/A 00:47:07.336 Firmware Update Granularity: No Information Provided 00:47:07.336 Per-Namespace SMART Log: No 00:47:07.336 Asymmetric Namespace Access Log Page: Not Supported 00:47:07.336 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:47:07.336 Command Effects Log Page: Not Supported 00:47:07.336 Get Log Page Extended Data: Supported 00:47:07.336 Telemetry Log Pages: Not Supported 00:47:07.336 Persistent Event Log Pages: Not Supported 00:47:07.336 Supported Log Pages Log Page: May Support 00:47:07.336 Commands Supported & Effects Log Page: Not Supported 00:47:07.336 Feature Identifiers & Effects Log Page:May Support 00:47:07.336 NVMe-MI Commands & Effects Log Page: May Support 00:47:07.336 Data Area 4 for Telemetry Log: Not Supported 00:47:07.336 Error Log Page Entries Supported: 128 00:47:07.336 Keep Alive: Not Supported 00:47:07.336 00:47:07.336 NVM Command Set Attributes 00:47:07.336 ========================== 00:47:07.336 Submission Queue Entry Size 00:47:07.336 Max: 1 00:47:07.336 Min: 1 00:47:07.336 Completion Queue Entry Size 00:47:07.336 Max: 1 00:47:07.336 Min: 1 00:47:07.336 Number of Namespaces: 0 00:47:07.336 Compare Command: Not Supported 00:47:07.336 Write Uncorrectable Command: Not Supported 00:47:07.336 Dataset Management Command: Not Supported 00:47:07.336 Write Zeroes Command: Not Supported 00:47:07.336 Set Features Save Field: Not Supported 00:47:07.336 Reservations: Not Supported 00:47:07.336 Timestamp: Not Supported 00:47:07.336 Copy: Not Supported 00:47:07.336 Volatile Write Cache: Not Present 00:47:07.336 Atomic Write Unit (Normal): 1 00:47:07.336 Atomic Write Unit (PFail): 1 00:47:07.336 Atomic Compare & Write Unit: 1 00:47:07.336 Fused Compare & Write: Supported 00:47:07.336 Scatter-Gather List 00:47:07.336 SGL Command Set: Supported 00:47:07.336 SGL Keyed: Supported 00:47:07.336 SGL Bit Bucket Descriptor: Not Supported 00:47:07.336 SGL Metadata Pointer: Not Supported 00:47:07.336 Oversized SGL: Not Supported 00:47:07.336 SGL Metadata Address: Not Supported 00:47:07.336 SGL Offset: Supported 00:47:07.336 Transport SGL Data Block: Not Supported 00:47:07.336 Replay Protected Memory Block: Not Supported 00:47:07.336 00:47:07.336 Firmware Slot Information 00:47:07.336 ========================= 00:47:07.336 Active slot: 0 00:47:07.336 00:47:07.336 00:47:07.336 Error Log 00:47:07.336 ========= 00:47:07.336 00:47:07.336 Active Namespaces 00:47:07.336 ================= 00:47:07.336 Discovery Log Page 00:47:07.336 ================== 00:47:07.336 Generation Counter: 2 00:47:07.336 Number of Records: 2 00:47:07.336 Record Format: 0 00:47:07.336 00:47:07.336 Discovery Log Entry 0 00:47:07.336 ---------------------- 00:47:07.336 Transport Type: 3 (TCP) 00:47:07.336 Address Family: 1 (IPv4) 00:47:07.336 Subsystem Type: 3 (Current Discovery Subsystem) 00:47:07.336 Entry Flags: 00:47:07.336 Duplicate Returned Information: 1 00:47:07.336 Explicit Persistent Connection Support for Discovery: 1 00:47:07.336 Transport Requirements: 00:47:07.336 Secure Channel: Not Required 00:47:07.336 Port ID: 0 (0x0000) 00:47:07.336 Controller ID: 65535 (0xffff) 00:47:07.336 Admin Max SQ Size: 128 00:47:07.336 Transport Service Identifier: 4420 00:47:07.336 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:47:07.336 Transport Address: 10.0.0.3 00:47:07.336 
Discovery Log Entry 1 00:47:07.336 ---------------------- 00:47:07.336 Transport Type: 3 (TCP) 00:47:07.336 Address Family: 1 (IPv4) 00:47:07.336 Subsystem Type: 2 (NVM Subsystem) 00:47:07.336 Entry Flags: 00:47:07.336 Duplicate Returned Information: 0 00:47:07.336 Explicit Persistent Connection Support for Discovery: 0 00:47:07.336 Transport Requirements: 00:47:07.336 Secure Channel: Not Required 00:47:07.336 Port ID: 0 (0x0000) 00:47:07.336 Controller ID: 65535 (0xffff) 00:47:07.336 Admin Max SQ Size: 128 00:47:07.336 Transport Service Identifier: 4420 00:47:07.336 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:47:07.336 Transport Address: 10.0.0.3 [2024-12-09 05:37:54.093293] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:47:07.336 [2024-12-09 05:37:54.093306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3740) on tqpair=0xe4f750 00:47:07.336 [2024-12-09 05:37:54.093314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:07.336 [2024-12-09 05:37:54.093320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb38c0) on tqpair=0xe4f750 00:47:07.336 [2024-12-09 05:37:54.093325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:07.336 [2024-12-09 05:37:54.093330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3a40) on tqpair=0xe4f750 00:47:07.336 [2024-12-09 05:37:54.093334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:07.336 [2024-12-09 05:37:54.093354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.336 [2024-12-09 05:37:54.093359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:07.336 [2024-12-09 05:37:54.093371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.336 [2024-12-09 05:37:54.093388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.336 [2024-12-09 05:37:54.093410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.336 [2024-12-09 05:37:54.093454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.336 [2024-12-09 05:37:54.093461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.336 [2024-12-09 05:37:54.093465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.336 [2024-12-09 05:37:54.093477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.336 [2024-12-09 05:37:54.093493] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.336 [2024-12-09 05:37:54.093515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.336 [2024-12-09 05:37:54.093587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.336 [2024-12-09 05:37:54.093594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.336 [2024-12-09 05:37:54.093598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.336 [2024-12-09 05:37:54.093608] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:47:07.336 [2024-12-09 05:37:54.093613] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:47:07.336 [2024-12-09 05:37:54.093624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.336 [2024-12-09 05:37:54.093657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.336 [2024-12-09 05:37:54.093678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.336 [2024-12-09 05:37:54.093730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.336 [2024-12-09 05:37:54.093737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.336 [2024-12-09 05:37:54.093741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.336 [2024-12-09 05:37:54.093757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.336 [2024-12-09 05:37:54.093774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.336 [2024-12-09 05:37:54.093793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.336 [2024-12-09 05:37:54.093838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.336 [2024-12-09 05:37:54.093845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.336 [2024-12-09 05:37:54.093849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.336 [2024-12-09 05:37:54.093864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.336 [2024-12-09 05:37:54.093873] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.336 [2024-12-09 05:37:54.093881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.336 [2024-12-09 05:37:54.093899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.337 [2024-12-09 05:37:54.093972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.337 [2024-12-09 05:37:54.093979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.337 [2024-12-09 05:37:54.093982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.093986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.337 [2024-12-09 05:37:54.093997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.337 [2024-12-09 05:37:54.094012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.337 [2024-12-09 05:37:54.094030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.337 [2024-12-09 05:37:54.094077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.337 [2024-12-09 05:37:54.094089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.337 [2024-12-09 05:37:54.094093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.337 [2024-12-09 05:37:54.094108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.337 [2024-12-09 05:37:54.094124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.337 [2024-12-09 05:37:54.094142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.337 [2024-12-09 05:37:54.094188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.337 [2024-12-09 05:37:54.094204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.337 [2024-12-09 05:37:54.094226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.337 [2024-12-09 05:37:54.094243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.337 [2024-12-09 05:37:54.094261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.337 [2024-12-09 05:37:54.094281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.337 [2024-12-09 05:37:54.094330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.337 [2024-12-09 05:37:54.094338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.337 [2024-12-09 05:37:54.094342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.337 [2024-12-09 05:37:54.094358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.337 [2024-12-09 05:37:54.094375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.337 [2024-12-09 05:37:54.094393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.337 [2024-12-09 05:37:54.094436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.337 [2024-12-09 05:37:54.094444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.337 [2024-12-09 05:37:54.094448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.337 [2024-12-09 05:37:54.094464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.337 [2024-12-09 05:37:54.094481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.337 [2024-12-09 05:37:54.094499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.337 [2024-12-09 05:37:54.094581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.337 [2024-12-09 05:37:54.094599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.337 [2024-12-09 05:37:54.094603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.094607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.337 [2024-12-09 05:37:54.094618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.098644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.098653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe4f750) 00:47:07.337 [2024-12-09 05:37:54.098662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.337 [2024-12-09 05:37:54.098691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeb3bc0, cid 3, qid 0 00:47:07.337 [2024-12-09 05:37:54.098745] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.337 [2024-12-09 05:37:54.098752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.337 [2024-12-09 05:37:54.098756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.337 [2024-12-09 05:37:54.098761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeb3bc0) on tqpair=0xe4f750 00:47:07.337 [2024-12-09 05:37:54.098770] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:47:07.337 00:47:07.337 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:47:07.337 [2024-12-09 05:37:54.223297] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:47:07.337 [2024-12-09 05:37:54.223350] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73919 ] 00:47:07.599 [2024-12-09 05:37:54.383060] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:47:07.599 [2024-12-09 05:37:54.383128] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:47:07.599 [2024-12-09 05:37:54.383134] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:47:07.599 [2024-12-09 05:37:54.383148] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:47:07.599 [2024-12-09 05:37:54.383156] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:47:07.599 [2024-12-09 05:37:54.383387] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:47:07.599 [2024-12-09 05:37:54.383443] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1136750 0 00:47:07.599 [2024-12-09 05:37:54.396603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:47:07.599 [2024-12-09 05:37:54.396625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:47:07.599 [2024-12-09 05:37:54.396646] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:47:07.599 [2024-12-09 05:37:54.396650] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:47:07.599 [2024-12-09 05:37:54.396680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.599 [2024-12-09 05:37:54.396686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.599 [2024-12-09 05:37:54.396690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1136750) 00:47:07.599 [2024-12-09 05:37:54.396701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:47:07.599 [2024-12-09 05:37:54.396730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a740, cid 0, qid 0 00:47:07.599 [2024-12-09 05:37:54.404614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.599 [2024-12-09 05:37:54.404635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:47:07.599 [2024-12-09 05:37:54.404656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.599 [2024-12-09 05:37:54.404661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a740) on tqpair=0x1136750 00:47:07.599 [2024-12-09 05:37:54.404674] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:47:07.599 [2024-12-09 05:37:54.404681] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:47:07.599 [2024-12-09 05:37:54.404688] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:47:07.599 [2024-12-09 05:37:54.404703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.599 [2024-12-09 05:37:54.404708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.599 [2024-12-09 05:37:54.404712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1136750) 00:47:07.600 [2024-12-09 05:37:54.404721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.600 [2024-12-09 05:37:54.404748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a740, cid 0, qid 0 00:47:07.600 [2024-12-09 05:37:54.404799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.600 [2024-12-09 05:37:54.404806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.600 [2024-12-09 05:37:54.404809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.404813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a740) on tqpair=0x1136750 00:47:07.600 [2024-12-09 05:37:54.404818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:47:07.600 [2024-12-09 05:37:54.404825] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:47:07.600 [2024-12-09 05:37:54.404832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.404837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.404840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1136750) 00:47:07.600 [2024-12-09 05:37:54.404847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.600 [2024-12-09 05:37:54.404865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a740, cid 0, qid 0 00:47:07.600 [2024-12-09 05:37:54.404923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.600 [2024-12-09 05:37:54.404930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.600 [2024-12-09 05:37:54.404933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.404937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a740) on tqpair=0x1136750 00:47:07.600 [2024-12-09 05:37:54.404943] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:47:07.600 [2024-12-09 05:37:54.404950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:47:07.600 [2024-12-09 05:37:54.404958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.404962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.404965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1136750) 00:47:07.600 [2024-12-09 05:37:54.404973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.600 [2024-12-09 05:37:54.404989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a740, cid 0, qid 0 00:47:07.600 [2024-12-09 05:37:54.405036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.600 [2024-12-09 05:37:54.405043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.600 [2024-12-09 05:37:54.405046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a740) on tqpair=0x1136750 00:47:07.600 [2024-12-09 05:37:54.405055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:47:07.600 [2024-12-09 05:37:54.405065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1136750) 00:47:07.600 [2024-12-09 05:37:54.405080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.600 [2024-12-09 05:37:54.405097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a740, cid 0, qid 0 00:47:07.600 [2024-12-09 05:37:54.405140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.600 [2024-12-09 05:37:54.405147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.600 [2024-12-09 05:37:54.405150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a740) on tqpair=0x1136750 00:47:07.600 [2024-12-09 05:37:54.405159] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:47:07.600 [2024-12-09 05:37:54.405164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:47:07.600 [2024-12-09 05:37:54.405171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:47:07.600 [2024-12-09 05:37:54.405277] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:47:07.600 [2024-12-09 05:37:54.405283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:47:07.600 [2024-12-09 05:37:54.405291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.600 [2024-12-09 
05:37:54.405295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1136750) 00:47:07.600 [2024-12-09 05:37:54.405306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.600 [2024-12-09 05:37:54.405324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a740, cid 0, qid 0 00:47:07.600 [2024-12-09 05:37:54.405368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.600 [2024-12-09 05:37:54.405375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.600 [2024-12-09 05:37:54.405378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a740) on tqpair=0x1136750 00:47:07.600 [2024-12-09 05:37:54.405387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:47:07.600 [2024-12-09 05:37:54.405397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1136750) 00:47:07.600 [2024-12-09 05:37:54.405412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.600 [2024-12-09 05:37:54.405428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a740, cid 0, qid 0 00:47:07.600 [2024-12-09 05:37:54.405475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.600 [2024-12-09 05:37:54.405481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.600 [2024-12-09 05:37:54.405485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a740) on tqpair=0x1136750 00:47:07.600 [2024-12-09 05:37:54.405493] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:47:07.600 [2024-12-09 05:37:54.405498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:47:07.600 [2024-12-09 05:37:54.405506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:47:07.600 [2024-12-09 05:37:54.405516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:47:07.600 [2024-12-09 05:37:54.405526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1136750) 00:47:07.600 [2024-12-09 05:37:54.405538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.600 [2024-12-09 05:37:54.405557] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a740, cid 0, qid 0 00:47:07.600 [2024-12-09 05:37:54.405673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.600 [2024-12-09 05:37:54.405681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.600 [2024-12-09 05:37:54.405685] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405689] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1136750): datao=0, datal=4096, cccid=0 00:47:07.600 [2024-12-09 05:37:54.405694] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119a740) on tqpair(0x1136750): expected_datao=0, payload_size=4096 00:47:07.600 [2024-12-09 05:37:54.405698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405706] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405710] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.600 [2024-12-09 05:37:54.405724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.600 [2024-12-09 05:37:54.405727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a740) on tqpair=0x1136750 00:47:07.600 [2024-12-09 05:37:54.405740] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:47:07.600 [2024-12-09 05:37:54.405745] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:47:07.600 [2024-12-09 05:37:54.405749] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:47:07.600 [2024-12-09 05:37:54.405758] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:47:07.600 [2024-12-09 05:37:54.405763] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:47:07.600 [2024-12-09 05:37:54.405769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:47:07.600 [2024-12-09 05:37:54.405778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:47:07.600 [2024-12-09 05:37:54.405785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1136750) 00:47:07.600 [2024-12-09 05:37:54.405801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:47:07.600 [2024-12-09 05:37:54.405823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a740, cid 0, qid 0 00:47:07.600 [2024-12-09 05:37:54.405872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.600 [2024-12-09 05:37:54.405879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.600 [2024-12-09 
05:37:54.405882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.600 [2024-12-09 05:37:54.405886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a740) on tqpair=0x1136750 00:47:07.600 [2024-12-09 05:37:54.405894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.405898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.405902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1136750) 00:47:07.601 [2024-12-09 05:37:54.405909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:47:07.601 [2024-12-09 05:37:54.405915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.405919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.405922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1136750) 00:47:07.601 [2024-12-09 05:37:54.405928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:47:07.601 [2024-12-09 05:37:54.405934] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.405938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.405942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1136750) 00:47:07.601 [2024-12-09 05:37:54.405947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:47:07.601 [2024-12-09 05:37:54.405953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.405972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.405975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1136750) 00:47:07.601 [2024-12-09 05:37:54.405981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:47:07.601 [2024-12-09 05:37:54.405986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.405994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.406001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1136750) 00:47:07.601 [2024-12-09 05:37:54.406012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.601 [2024-12-09 05:37:54.406035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a740, cid 0, qid 0 00:47:07.601 [2024-12-09 05:37:54.406042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119a8c0, cid 1, qid 0 00:47:07.601 [2024-12-09 05:37:54.406047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119aa40, cid 2, qid 0 00:47:07.601 
[2024-12-09 05:37:54.406052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119abc0, cid 3, qid 0 00:47:07.601 [2024-12-09 05:37:54.406056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119ad40, cid 4, qid 0 00:47:07.601 [2024-12-09 05:37:54.406138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.601 [2024-12-09 05:37:54.406145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.601 [2024-12-09 05:37:54.406148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119ad40) on tqpair=0x1136750 00:47:07.601 [2024-12-09 05:37:54.406157] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:47:07.601 [2024-12-09 05:37:54.406162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.406171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.406177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.406184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1136750) 00:47:07.601 [2024-12-09 05:37:54.406224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:47:07.601 [2024-12-09 05:37:54.406245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119ad40, cid 4, qid 0 00:47:07.601 [2024-12-09 05:37:54.406295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.601 [2024-12-09 05:37:54.406302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.601 [2024-12-09 05:37:54.406306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119ad40) on tqpair=0x1136750 00:47:07.601 [2024-12-09 05:37:54.406375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.406387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.406396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1136750) 00:47:07.601 [2024-12-09 05:37:54.406408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.601 [2024-12-09 05:37:54.406428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119ad40, cid 4, qid 0 00:47:07.601 
[2024-12-09 05:37:54.406487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.601 [2024-12-09 05:37:54.406494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.601 [2024-12-09 05:37:54.406498] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406502] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1136750): datao=0, datal=4096, cccid=4 00:47:07.601 [2024-12-09 05:37:54.406507] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119ad40) on tqpair(0x1136750): expected_datao=0, payload_size=4096 00:47:07.601 [2024-12-09 05:37:54.406511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406519] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406537] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.601 [2024-12-09 05:37:54.406551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.601 [2024-12-09 05:37:54.406555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119ad40) on tqpair=0x1136750 00:47:07.601 [2024-12-09 05:37:54.406583] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:47:07.601 [2024-12-09 05:37:54.406594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.406617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.406627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1136750) 00:47:07.601 [2024-12-09 05:37:54.406638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.601 [2024-12-09 05:37:54.406659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119ad40, cid 4, qid 0 00:47:07.601 [2024-12-09 05:37:54.406731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.601 [2024-12-09 05:37:54.406738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.601 [2024-12-09 05:37:54.406742] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406745] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1136750): datao=0, datal=4096, cccid=4 00:47:07.601 [2024-12-09 05:37:54.406750] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119ad40) on tqpair(0x1136750): expected_datao=0, payload_size=4096 00:47:07.601 [2024-12-09 05:37:54.406754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406761] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406764] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:47:07.601 [2024-12-09 05:37:54.406778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.601 [2024-12-09 05:37:54.406782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119ad40) on tqpair=0x1136750 00:47:07.601 [2024-12-09 05:37:54.406802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.406813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:47:07.601 [2024-12-09 05:37:54.406822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1136750) 00:47:07.601 [2024-12-09 05:37:54.406833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.601 [2024-12-09 05:37:54.406852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119ad40, cid 4, qid 0 00:47:07.601 [2024-12-09 05:37:54.406906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.601 [2024-12-09 05:37:54.406913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.601 [2024-12-09 05:37:54.406916] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406920] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1136750): datao=0, datal=4096, cccid=4 00:47:07.601 [2024-12-09 05:37:54.406924] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119ad40) on tqpair(0x1136750): expected_datao=0, payload_size=4096 00:47:07.601 [2024-12-09 05:37:54.406929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406936] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406939] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.601 [2024-12-09 05:37:54.406953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.601 [2024-12-09 05:37:54.406956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.601 [2024-12-09 05:37:54.406960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119ad40) on tqpair=0x1136750 00:47:07.601 [2024-12-09 05:37:54.406969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:47:07.602 [2024-12-09 05:37:54.406977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:47:07.602 [2024-12-09 05:37:54.406987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:47:07.602 [2024-12-09 05:37:54.406994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:47:07.602 [2024-12-09 
05:37:54.406999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:47:07.602 [2024-12-09 05:37:54.407004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:47:07.602 [2024-12-09 05:37:54.407009] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:47:07.602 [2024-12-09 05:37:54.407014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:47:07.602 [2024-12-09 05:37:54.407019] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:47:07.602 [2024-12-09 05:37:54.407033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1136750) 00:47:07.602 [2024-12-09 05:37:54.407045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.602 [2024-12-09 05:37:54.407051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1136750) 00:47:07.602 [2024-12-09 05:37:54.407065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:47:07.602 [2024-12-09 05:37:54.407088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119ad40, cid 4, qid 0 00:47:07.602 [2024-12-09 05:37:54.407096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119aec0, cid 5, qid 0 00:47:07.602 [2024-12-09 05:37:54.407156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.602 [2024-12-09 05:37:54.407163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.602 [2024-12-09 05:37:54.407166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119ad40) on tqpair=0x1136750 00:47:07.602 [2024-12-09 05:37:54.407176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.602 [2024-12-09 05:37:54.407182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.602 [2024-12-09 05:37:54.407185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119aec0) on tqpair=0x1136750 00:47:07.602 [2024-12-09 05:37:54.407199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1136750) 00:47:07.602 [2024-12-09 05:37:54.407210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.602 [2024-12-09 05:37:54.407226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119aec0, cid 5, qid 0 
00:47:07.602 [2024-12-09 05:37:54.407268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.602 [2024-12-09 05:37:54.407275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.602 [2024-12-09 05:37:54.407279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119aec0) on tqpair=0x1136750 00:47:07.602 [2024-12-09 05:37:54.407293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1136750) 00:47:07.602 [2024-12-09 05:37:54.407304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.602 [2024-12-09 05:37:54.407319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119aec0, cid 5, qid 0 00:47:07.602 [2024-12-09 05:37:54.407364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.602 [2024-12-09 05:37:54.407371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.602 [2024-12-09 05:37:54.407374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119aec0) on tqpair=0x1136750 00:47:07.602 [2024-12-09 05:37:54.407388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1136750) 00:47:07.602 [2024-12-09 05:37:54.407399] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.602 [2024-12-09 05:37:54.407415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119aec0, cid 5, qid 0 00:47:07.602 [2024-12-09 05:37:54.407456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.602 [2024-12-09 05:37:54.407462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.602 [2024-12-09 05:37:54.407466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119aec0) on tqpair=0x1136750 00:47:07.602 [2024-12-09 05:37:54.407487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1136750) 00:47:07.602 [2024-12-09 05:37:54.407500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.602 [2024-12-09 05:37:54.407507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1136750) 00:47:07.602 [2024-12-09 05:37:54.407517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.602 [2024-12-09 05:37:54.407524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:47:07.602 [2024-12-09 05:37:54.407527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1136750) 00:47:07.602 [2024-12-09 05:37:54.407534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.602 [2024-12-09 05:37:54.407541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1136750) 00:47:07.602 [2024-12-09 05:37:54.407551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.602 [2024-12-09 05:37:54.407583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119aec0, cid 5, qid 0 00:47:07.602 [2024-12-09 05:37:54.407591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119ad40, cid 4, qid 0 00:47:07.602 [2024-12-09 05:37:54.407596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b040, cid 6, qid 0 00:47:07.602 [2024-12-09 05:37:54.407616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1c0, cid 7, qid 0 00:47:07.602 [2024-12-09 05:37:54.407755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.602 [2024-12-09 05:37:54.407763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.602 [2024-12-09 05:37:54.407767] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407770] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1136750): datao=0, datal=8192, cccid=5 00:47:07.602 [2024-12-09 05:37:54.407775] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119aec0) on tqpair(0x1136750): expected_datao=0, payload_size=8192 00:47:07.602 [2024-12-09 05:37:54.407780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407796] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407801] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.602 [2024-12-09 05:37:54.407812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.602 [2024-12-09 05:37:54.407816] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407819] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1136750): datao=0, datal=512, cccid=4 00:47:07.602 [2024-12-09 05:37:54.407824] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119ad40) on tqpair(0x1136750): expected_datao=0, payload_size=512 00:47:07.602 [2024-12-09 05:37:54.407828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407834] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407838] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.602 [2024-12-09 05:37:54.407849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.602 [2024-12-09 05:37:54.407852] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407856] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1136750): datao=0, datal=512, cccid=6 00:47:07.602 [2024-12-09 05:37:54.407860] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119b040) on tqpair(0x1136750): expected_datao=0, payload_size=512 00:47:07.602 [2024-12-09 05:37:54.407864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407870] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407874] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:07.602 [2024-12-09 05:37:54.407884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:07.602 [2024-12-09 05:37:54.407888] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407892] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1136750): datao=0, datal=4096, cccid=7 00:47:07.602 [2024-12-09 05:37:54.407896] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119b1c0) on tqpair(0x1136750): expected_datao=0, payload_size=4096 00:47:07.602 [2024-12-09 05:37:54.407900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407906] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407910] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:07.602 [2024-12-09 05:37:54.407915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.602 [2024-12-09 05:37:54.407921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.602 ===================================================== 00:47:07.602 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:47:07.602 ===================================================== 00:47:07.602 Controller Capabilities/Features 00:47:07.602 ================================ 00:47:07.602 Vendor ID: 8086 00:47:07.603 Subsystem Vendor ID: 8086 00:47:07.603 Serial Number: SPDK00000000000001 00:47:07.603 Model Number: SPDK bdev Controller 00:47:07.603 Firmware Version: 25.01 00:47:07.603 Recommended Arb Burst: 6 00:47:07.603 IEEE OUI Identifier: e4 d2 5c 00:47:07.603 Multi-path I/O 00:47:07.603 May have multiple subsystem ports: Yes 00:47:07.603 May have multiple controllers: Yes 00:47:07.603 Associated with SR-IOV VF: No 00:47:07.603 Max Data Transfer Size: 131072 00:47:07.603 Max Number of Namespaces: 32 00:47:07.603 Max Number of I/O Queues: 127 00:47:07.603 NVMe Specification Version (VS): 1.3 00:47:07.603 NVMe Specification Version (Identify): 1.3 00:47:07.603 Maximum Queue Entries: 128 00:47:07.603 Contiguous Queues Required: Yes 00:47:07.603 Arbitration Mechanisms Supported 00:47:07.603 Weighted Round Robin: Not Supported 00:47:07.603 Vendor Specific: Not Supported 00:47:07.603 Reset Timeout: 15000 ms 00:47:07.603 Doorbell Stride: 4 bytes 00:47:07.603 NVM Subsystem Reset: Not Supported 00:47:07.603 Command Sets Supported 00:47:07.603 NVM Command Set: Supported 00:47:07.603 Boot Partition: Not Supported 00:47:07.603 Memory Page Size Minimum: 4096 bytes 00:47:07.603 Memory Page Size Maximum: 4096 bytes 00:47:07.603 Persistent Memory Region: Not Supported 00:47:07.603 Optional Asynchronous Events 
Supported 00:47:07.603 Namespace Attribute Notices: Supported 00:47:07.603 Firmware Activation Notices: Not Supported 00:47:07.603 ANA Change Notices: Not Supported 00:47:07.603 PLE Aggregate Log Change Notices: Not Supported 00:47:07.603 LBA Status Info Alert Notices: Not Supported 00:47:07.603 EGE Aggregate Log Change Notices: Not Supported 00:47:07.603 Normal NVM Subsystem Shutdown event: Not Supported 00:47:07.603 Zone Descriptor Change Notices: Not Supported 00:47:07.603 Discovery Log Change Notices: Not Supported 00:47:07.603 Controller Attributes 00:47:07.603 128-bit Host Identifier: Supported 00:47:07.603 Non-Operational Permissive Mode: Not Supported 00:47:07.603 NVM Sets: Not Supported 00:47:07.603 Read Recovery Levels: Not Supported 00:47:07.603 Endurance Groups: Not Supported 00:47:07.603 Predictable Latency Mode: Not Supported 00:47:07.603 Traffic Based Keep ALive: Not Supported 00:47:07.603 Namespace Granularity: Not Supported 00:47:07.603 SQ Associations: Not Supported 00:47:07.603 UUID List: Not Supported 00:47:07.603 Multi-Domain Subsystem: Not Supported 00:47:07.603 Fixed Capacity Management: Not Supported 00:47:07.603 Variable Capacity Management: Not Supported 00:47:07.603 Delete Endurance Group: Not Supported 00:47:07.603 Delete NVM Set: Not Supported 00:47:07.603 Extended LBA Formats Supported: Not Supported 00:47:07.603 Flexible Data Placement Supported: Not Supported 00:47:07.603 00:47:07.603 Controller Memory Buffer Support 00:47:07.603 ================================ 00:47:07.603 Supported: No 00:47:07.603 00:47:07.603 Persistent Memory Region Support 00:47:07.603 ================================ 00:47:07.603 Supported: No 00:47:07.603 00:47:07.603 Admin Command Set Attributes 00:47:07.603 ============================ 00:47:07.603 Security Send/Receive: Not Supported 00:47:07.603 Format NVM: Not Supported 00:47:07.603 Firmware Activate/Download: Not Supported 00:47:07.603 Namespace Management: Not Supported 00:47:07.603 Device Self-Test: Not Supported 00:47:07.603 Directives: Not Supported 00:47:07.603 NVMe-MI: Not Supported 00:47:07.603 Virtualization Management: Not Supported 00:47:07.603 Doorbell Buffer Config: Not Supported 00:47:07.603 Get LBA Status Capability: Not Supported 00:47:07.603 Command & Feature Lockdown Capability: Not Supported 00:47:07.603 Abort Command Limit: 4 00:47:07.603 Async Event Request Limit: 4 00:47:07.603 Number of Firmware Slots: N/A 00:47:07.603 Firmware Slot 1 Read-Only: N/A 00:47:07.603 Firmware Activation Without Reset: N/A 00:47:07.603 Multiple Update Detection Support: N/A 00:47:07.603 Firmware Update Granularity: No Information Provided 00:47:07.603 Per-Namespace SMART Log: No 00:47:07.603 Asymmetric Namespace Access Log Page: Not Supported 00:47:07.603 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:47:07.603 Command Effects Log Page: Supported 00:47:07.603 Get Log Page Extended Data: Supported 00:47:07.603 Telemetry Log Pages: Not Supported 00:47:07.603 Persistent Event Log Pages: Not Supported 00:47:07.603 Supported Log Pages Log Page: May Support 00:47:07.603 Commands Supported & Effects Log Page: Not Supported 00:47:07.603 Feature Identifiers & Effects Log Page:May Support 00:47:07.603 NVMe-MI Commands & Effects Log Page: May Support 00:47:07.603 Data Area 4 for Telemetry Log: Not Supported 00:47:07.603 Error Log Page Entries Supported: 128 00:47:07.603 Keep Alive: Supported 00:47:07.603 Keep Alive Granularity: 10000 ms 00:47:07.603 00:47:07.603 NVM Command Set Attributes 00:47:07.603 ========================== 00:47:07.603 
Submission Queue Entry Size 00:47:07.603 Max: 64 00:47:07.603 Min: 64 00:47:07.603 Completion Queue Entry Size 00:47:07.603 Max: 16 00:47:07.603 Min: 16 00:47:07.603 Number of Namespaces: 32 00:47:07.603 Compare Command: Supported 00:47:07.603 Write Uncorrectable Command: Not Supported 00:47:07.603 Dataset Management Command: Supported 00:47:07.603 Write Zeroes Command: Supported 00:47:07.603 Set Features Save Field: Not Supported 00:47:07.603 Reservations: Supported 00:47:07.603 Timestamp: Not Supported 00:47:07.603 Copy: Supported 00:47:07.603 Volatile Write Cache: Present 00:47:07.603 Atomic Write Unit (Normal): 1 00:47:07.603 Atomic Write Unit (PFail): 1 00:47:07.603 Atomic Compare & Write Unit: 1 00:47:07.603 Fused Compare & Write: Supported 00:47:07.603 Scatter-Gather List 00:47:07.603 SGL Command Set: Supported 00:47:07.603 SGL Keyed: Supported 00:47:07.603 SGL Bit Bucket Descriptor: Not Supported 00:47:07.603 SGL Metadata Pointer: Not Supported 00:47:07.603 Oversized SGL: Not Supported 00:47:07.603 SGL Metadata Address: Not Supported 00:47:07.603 SGL Offset: Supported 00:47:07.603 Transport SGL Data Block: Not Supported 00:47:07.603 Replay Protected Memory Block: Not Supported 00:47:07.603 00:47:07.603 Firmware Slot Information 00:47:07.603 ========================= 00:47:07.603 Active slot: 1 00:47:07.603 Slot 1 Firmware Revision: 25.01 00:47:07.603 00:47:07.603 00:47:07.603 Commands Supported and Effects 00:47:07.603 ============================== 00:47:07.603 Admin Commands 00:47:07.603 -------------- 00:47:07.603 Get Log Page (02h): Supported 00:47:07.603 Identify (06h): Supported 00:47:07.603 Abort (08h): Supported 00:47:07.603 Set Features (09h): Supported 00:47:07.603 Get Features (0Ah): Supported 00:47:07.603 Asynchronous Event Request (0Ch): Supported 00:47:07.603 Keep Alive (18h): Supported 00:47:07.603 I/O Commands 00:47:07.603 ------------ 00:47:07.603 Flush (00h): Supported LBA-Change 00:47:07.603 Write (01h): Supported LBA-Change 00:47:07.603 Read (02h): Supported 00:47:07.603 Compare (05h): Supported 00:47:07.603 Write Zeroes (08h): Supported LBA-Change 00:47:07.603 Dataset Management (09h): Supported LBA-Change 00:47:07.603 Copy (19h): Supported LBA-Change 00:47:07.603 00:47:07.603 Error Log 00:47:07.603 ========= 00:47:07.603 00:47:07.603 Arbitration 00:47:07.603 =========== 00:47:07.603 Arbitration Burst: 1 00:47:07.603 00:47:07.603 Power Management 00:47:07.603 ================ 00:47:07.603 Number of Power States: 1 00:47:07.603 Current Power State: Power State #0 00:47:07.603 Power State #0: 00:47:07.603 Max Power: 0.00 W 00:47:07.603 Non-Operational State: Operational 00:47:07.603 Entry Latency: Not Reported 00:47:07.603 Exit Latency: Not Reported 00:47:07.603 Relative Read Throughput: 0 00:47:07.603 Relative Read Latency: 0 00:47:07.603 Relative Write Throughput: 0 00:47:07.603 Relative Write Latency: 0 00:47:07.603 Idle Power: Not Reported 00:47:07.603 Active Power: Not Reported 00:47:07.603 Non-Operational Permissive Mode: Not Supported 00:47:07.603 00:47:07.603 Health Information 00:47:07.603 ================== 00:47:07.603 Critical Warnings: 00:47:07.603 Available Spare Space: OK 00:47:07.603 Temperature: OK 00:47:07.603 Device Reliability: OK 00:47:07.603 Read Only: No 00:47:07.603 Volatile Memory Backup: OK 00:47:07.603 Current Temperature: 0 Kelvin (-273 Celsius) 00:47:07.603 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:47:07.603 Available Spare: 0% 00:47:07.603 Available Spare Threshold: 0% 00:47:07.603 Life Percentage Used:[2024-12-09 
05:37:54.407924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.603 [2024-12-09 05:37:54.407928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119aec0) on tqpair=0x1136750 00:47:07.603 [2024-12-09 05:37:54.407943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.604 [2024-12-09 05:37:54.407950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.604 [2024-12-09 05:37:54.407954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.407958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119ad40) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.407969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.604 [2024-12-09 05:37:54.407990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.604 [2024-12-09 05:37:54.407993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.407997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119b040) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.408004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.604 [2024-12-09 05:37:54.408009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.604 [2024-12-09 05:37:54.408012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119b1c0) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.408108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1136750) 00:47:07.604 [2024-12-09 05:37:54.408123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.604 [2024-12-09 05:37:54.408145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1c0, cid 7, qid 0 00:47:07.604 [2024-12-09 05:37:54.408192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.604 [2024-12-09 05:37:54.408199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.604 [2024-12-09 05:37:54.408202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119b1c0) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.408242] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:47:07.604 [2024-12-09 05:37:54.408253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a740) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.408260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:07.604 [2024-12-09 05:37:54.408265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119a8c0) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.408270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:07.604 [2024-12-09 05:37:54.408274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119aa40) on tqpair=0x1136750 00:47:07.604 
[2024-12-09 05:37:54.408279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:07.604 [2024-12-09 05:37:54.408284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119abc0) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.408288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:07.604 [2024-12-09 05:37:54.408296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1136750) 00:47:07.604 [2024-12-09 05:37:54.408311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.604 [2024-12-09 05:37:54.408332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119abc0, cid 3, qid 0 00:47:07.604 [2024-12-09 05:37:54.408372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.604 [2024-12-09 05:37:54.408379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.604 [2024-12-09 05:37:54.408382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119abc0) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.408393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1136750) 00:47:07.604 [2024-12-09 05:37:54.408408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.604 [2024-12-09 05:37:54.408428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119abc0, cid 3, qid 0 00:47:07.604 [2024-12-09 05:37:54.408484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.604 [2024-12-09 05:37:54.408491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.604 [2024-12-09 05:37:54.408494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119abc0) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.408503] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:47:07.604 [2024-12-09 05:37:54.408507] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:47:07.604 [2024-12-09 05:37:54.408517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.408525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1136750) 00:47:07.604 [2024-12-09 05:37:54.408532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.604 [2024-12-09 
05:37:54.408548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119abc0, cid 3, qid 0 00:47:07.604 [2024-12-09 05:37:54.412626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.604 [2024-12-09 05:37:54.412643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.604 [2024-12-09 05:37:54.412648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.412652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119abc0) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.412665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.412670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.412673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1136750) 00:47:07.604 [2024-12-09 05:37:54.412681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:07.604 [2024-12-09 05:37:54.412705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119abc0, cid 3, qid 0 00:47:07.604 [2024-12-09 05:37:54.412750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:07.604 [2024-12-09 05:37:54.412756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:07.604 [2024-12-09 05:37:54.412760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:07.604 [2024-12-09 05:37:54.412764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x119abc0) on tqpair=0x1136750 00:47:07.604 [2024-12-09 05:37:54.412771] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:47:07.604 0% 00:47:07.604 Data Units Read: 0 00:47:07.604 Data Units Written: 0 00:47:07.604 Host Read Commands: 0 00:47:07.604 Host Write Commands: 0 00:47:07.604 Controller Busy Time: 0 minutes 00:47:07.604 Power Cycles: 0 00:47:07.604 Power On Hours: 0 hours 00:47:07.604 Unsafe Shutdowns: 0 00:47:07.604 Unrecoverable Media Errors: 0 00:47:07.604 Lifetime Error Log Entries: 0 00:47:07.604 Warning Temperature Time: 0 minutes 00:47:07.604 Critical Temperature Time: 0 minutes 00:47:07.604 00:47:07.604 Number of Queues 00:47:07.604 ================ 00:47:07.604 Number of I/O Submission Queues: 127 00:47:07.604 Number of I/O Completion Queues: 127 00:47:07.604 00:47:07.604 Active Namespaces 00:47:07.604 ================= 00:47:07.604 Namespace ID:1 00:47:07.604 Error Recovery Timeout: Unlimited 00:47:07.604 Command Set Identifier: NVM (00h) 00:47:07.604 Deallocate: Supported 00:47:07.604 Deallocated/Unwritten Error: Not Supported 00:47:07.604 Deallocated Read Value: Unknown 00:47:07.604 Deallocate in Write Zeroes: Not Supported 00:47:07.604 Deallocated Guard Field: 0xFFFF 00:47:07.604 Flush: Supported 00:47:07.604 Reservation: Supported 00:47:07.604 Namespace Sharing Capabilities: Multiple Controllers 00:47:07.604 Size (in LBAs): 131072 (0GiB) 00:47:07.604 Capacity (in LBAs): 131072 (0GiB) 00:47:07.604 Utilization (in LBAs): 131072 (0GiB) 00:47:07.604 NGUID: ABCDEF0123456789ABCDEF0123456789 00:47:07.604 EUI64: ABCDEF0123456789 00:47:07.604 UUID: 39588ada-e178-4b1a-a256-f069028d2189 00:47:07.604 Thin Provisioning: Not Supported 00:47:07.604 Per-NS Atomic Units: Yes 00:47:07.604 Atomic Boundary Size (Normal): 0 00:47:07.604 Atomic Boundary Size (PFail): 0 00:47:07.604 Atomic 
Boundary Offset: 0 00:47:07.605 Maximum Single Source Range Length: 65535 00:47:07.605 Maximum Copy Length: 65535 00:47:07.605 Maximum Source Range Count: 1 00:47:07.605 NGUID/EUI64 Never Reused: No 00:47:07.605 Namespace Write Protected: No 00:47:07.605 Number of LBA Formats: 1 00:47:07.605 Current LBA Format: LBA Format #00 00:47:07.605 LBA Format #00: Data Size: 512 Metadata Size: 0 00:47:07.605 00:47:07.605 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:07.864 rmmod nvme_tcp 00:47:07.864 rmmod nvme_fabrics 00:47:07.864 rmmod nvme_keyring 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73884 ']' 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73884 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73884 ']' 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73884 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73884 00:47:07.864 killing process with pid 73884 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73884' 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 73884 00:47:07.864 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73884 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:08.123 05:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:08.123 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:08.123 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:08.123 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:08.123 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:08.123 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:08.124 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:08.124 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:47:08.396 00:47:08.396 real 0m2.323s 00:47:08.396 user 0m5.001s 00:47:08.396 sys 0m0.691s 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:08.396 ************************************ 00:47:08.396 END TEST nvmf_identify 00:47:08.396 ************************************ 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:47:08.396 ************************************ 00:47:08.396 START TEST nvmf_perf 00:47:08.396 ************************************ 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:47:08.396 * Looking for test storage... 00:47:08.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:47:08.396 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:08.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:08.397 --rc genhtml_branch_coverage=1 00:47:08.397 --rc genhtml_function_coverage=1 00:47:08.397 --rc genhtml_legend=1 00:47:08.397 --rc geninfo_all_blocks=1 00:47:08.397 --rc geninfo_unexecuted_blocks=1 00:47:08.397 00:47:08.397 ' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:08.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:08.397 --rc genhtml_branch_coverage=1 00:47:08.397 --rc genhtml_function_coverage=1 00:47:08.397 --rc genhtml_legend=1 00:47:08.397 --rc geninfo_all_blocks=1 00:47:08.397 --rc geninfo_unexecuted_blocks=1 00:47:08.397 00:47:08.397 ' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:08.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:08.397 --rc genhtml_branch_coverage=1 00:47:08.397 --rc genhtml_function_coverage=1 00:47:08.397 --rc genhtml_legend=1 00:47:08.397 --rc geninfo_all_blocks=1 00:47:08.397 --rc geninfo_unexecuted_blocks=1 00:47:08.397 00:47:08.397 ' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:08.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:08.397 --rc genhtml_branch_coverage=1 00:47:08.397 --rc genhtml_function_coverage=1 00:47:08.397 --rc genhtml_legend=1 00:47:08.397 --rc geninfo_all_blocks=1 00:47:08.397 --rc geninfo_unexecuted_blocks=1 00:47:08.397 00:47:08.397 ' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:08.397 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:08.397 Cannot find device "nvmf_init_br" 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:47:08.397 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:08.655 Cannot find device "nvmf_init_br2" 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:08.655 Cannot find device "nvmf_tgt_br" 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:08.655 Cannot find device "nvmf_tgt_br2" 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:08.655 Cannot find device "nvmf_init_br" 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:08.655 Cannot find device "nvmf_init_br2" 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:08.655 Cannot find device "nvmf_tgt_br" 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:08.655 Cannot find device "nvmf_tgt_br2" 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:08.655 Cannot find device "nvmf_br" 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:08.655 Cannot find device "nvmf_init_if" 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:08.655 Cannot find device "nvmf_init_if2" 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:08.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:08.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:08.655 05:37:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:08.655 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:08.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:08.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:47:08.914 00:47:08.914 --- 10.0.0.3 ping statistics --- 00:47:08.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:08.914 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:08.914 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:47:08.914 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:47:08.914 00:47:08.914 --- 10.0.0.4 ping statistics --- 00:47:08.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:08.914 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:08.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:08.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:47:08.914 00:47:08.914 --- 10.0.0.1 ping statistics --- 00:47:08.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:08.914 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:08.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:08.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:47:08.914 00:47:08.914 --- 10.0.0.2 ping statistics --- 00:47:08.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:08.914 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74135 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74135 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74135 ']' 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:08.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
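For orientation, the nvmf_veth_init trace above builds a small virtual test network before the target application starts: veth pairs whose host-side ends (nvmf_init_br, nvmf_tgt_br and their if2/br2 twins) are joined by a bridge nvmf_br, with the target-side interfaces moved into the nvmf_tgt_ns_spdk network namespace; the initiator sits at 10.0.0.1/10.0.0.2 and the target at 10.0.0.3/10.0.0.4, and the pings confirm reachability in both directions. A condensed sketch of the equivalent commands, pieced together from the ip/iptables calls logged above (the second if2/br2 pair and the remaining 'up' calls are done the same way; this is not the script itself):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br; ip link set nvmf_tgt_br up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                             # initiator -> target sanity check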
00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:08.914 05:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:47:08.914 [2024-12-09 05:37:55.800676] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:47:08.914 [2024-12-09 05:37:55.800946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:09.174 [2024-12-09 05:37:55.959696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:09.174 [2024-12-09 05:37:56.001664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:09.174 [2024-12-09 05:37:56.001962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:09.174 [2024-12-09 05:37:56.002167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:09.174 [2024-12-09 05:37:56.002347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:09.174 [2024-12-09 05:37:56.002394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:09.174 [2024-12-09 05:37:56.003477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:09.174 [2024-12-09 05:37:56.003608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:09.174 [2024-12-09 05:37:56.003663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:09.174 [2024-12-09 05:37:56.003667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:09.174 [2024-12-09 05:37:56.042328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:09.174 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:09.174 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:47:09.174 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:09.174 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:09.174 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:47:09.174 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:09.174 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:47:09.174 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:47:09.742 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:47:09.742 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:47:10.002 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:47:10.002 05:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:47:10.261 05:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:47:10.261 05:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:47:10.261 05:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:47:10.261 05:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:47:10.261 05:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:47:10.519 [2024-12-09 05:37:57.342300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:10.519 05:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:10.777 05:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:47:10.777 05:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:11.035 05:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:47:11.035 05:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:47:11.294 05:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:11.570 [2024-12-09 05:37:58.343547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:11.570 05:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:47:11.836 05:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:47:11.836 05:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:47:11.837 05:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:47:11.837 05:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:47:13.214 Initializing NVMe Controllers 00:47:13.214 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:13.214 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:47:13.214 Initialization complete. Launching workers. 00:47:13.214 ======================================================== 00:47:13.214 Latency(us) 00:47:13.214 Device Information : IOPS MiB/s Average min max 00:47:13.214 PCIE (0000:00:10.0) NSID 1 from core 0: 22144.89 86.50 1445.03 355.40 8904.47 00:47:13.214 ======================================================== 00:47:13.214 Total : 22144.89 86.50 1445.03 355.40 8904.47 00:47:13.214 00:47:13.214 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:47:14.586 Initializing NVMe Controllers 00:47:14.586 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:47:14.586 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:47:14.586 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:47:14.586 Initialization complete. Launching workers. 
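Stripped of the xtrace noise, the target bring-up that perf.sh performs above boils down to a short JSON-RPC sequence; the sketch below shortens rpc.py's full path but every call is taken from the trace:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_malloc_create 64 512                                    # the 64 MiB ramdisk -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # local NVMe at 0000:00:10.0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

After this the subsystem is reachable at 10.0.0.3:4420 and the spdk_nvme_perf runs below exercise it over TCP.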
00:47:14.586 ======================================================== 00:47:14.586 Latency(us) 00:47:14.586 Device Information : IOPS MiB/s Average min max 00:47:14.586 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4007.26 15.65 248.21 94.72 7228.95 00:47:14.586 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.52 0.48 8161.88 4920.98 12067.29 00:47:14.586 ======================================================== 00:47:14.586 Total : 4129.78 16.13 482.98 94.72 12067.29 00:47:14.586 00:47:14.586 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:47:15.960 Initializing NVMe Controllers 00:47:15.960 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:47:15.960 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:47:15.960 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:47:15.960 Initialization complete. Launching workers. 00:47:15.960 ======================================================== 00:47:15.960 Latency(us) 00:47:15.960 Device Information : IOPS MiB/s Average min max 00:47:15.960 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9424.74 36.82 3395.01 456.67 7615.51 00:47:15.960 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3989.95 15.59 8033.54 6876.40 12591.53 00:47:15.960 ======================================================== 00:47:15.960 Total : 13414.69 52.40 4774.66 456.67 12591.53 00:47:15.960 00:47:15.960 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:47:15.960 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:47:18.492 Initializing NVMe Controllers 00:47:18.492 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:47:18.492 Controller IO queue size 128, less than required. 00:47:18.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:18.492 Controller IO queue size 128, less than required. 00:47:18.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:18.492 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:47:18.492 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:47:18.492 Initialization complete. Launching workers. 
00:47:18.492 ======================================================== 00:47:18.492 Latency(us) 00:47:18.492 Device Information : IOPS MiB/s Average min max 00:47:18.492 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1991.98 497.99 65452.07 36581.42 98574.15 00:47:18.492 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 670.49 167.62 193914.46 50403.38 337671.11 00:47:18.492 ======================================================== 00:47:18.492 Total : 2662.47 665.62 97802.88 36581.42 337671.11 00:47:18.492 00:47:18.492 05:38:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:47:18.751 Initializing NVMe Controllers 00:47:18.751 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:47:18.751 Controller IO queue size 128, less than required. 00:47:18.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:18.751 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:47:18.751 Controller IO queue size 128, less than required. 00:47:18.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:18.751 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:47:18.751 WARNING: Some requested NVMe devices were skipped 00:47:18.751 No valid NVMe controllers or AIO or URING devices found 00:47:19.010 05:38:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:47:21.546 Initializing NVMe Controllers 00:47:21.546 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:47:21.546 Controller IO queue size 128, less than required. 00:47:21.546 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:21.546 Controller IO queue size 128, less than required. 00:47:21.546 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:21.546 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:47:21.546 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:47:21.546 Initialization complete. Launching workers. 
00:47:21.546 00:47:21.546 ==================== 00:47:21.546 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:47:21.546 TCP transport: 00:47:21.546 polls: 11610 00:47:21.546 idle_polls: 7080 00:47:21.546 sock_completions: 4530 00:47:21.546 nvme_completions: 6965 00:47:21.546 submitted_requests: 10440 00:47:21.546 queued_requests: 1 00:47:21.546 00:47:21.546 ==================== 00:47:21.546 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:47:21.546 TCP transport: 00:47:21.546 polls: 11892 00:47:21.546 idle_polls: 6967 00:47:21.546 sock_completions: 4925 00:47:21.546 nvme_completions: 6955 00:47:21.546 submitted_requests: 10398 00:47:21.546 queued_requests: 1 00:47:21.546 ======================================================== 00:47:21.546 Latency(us) 00:47:21.546 Device Information : IOPS MiB/s Average min max 00:47:21.546 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1741.00 435.25 74163.03 44649.27 111367.71 00:47:21.546 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1738.50 434.62 74323.36 21417.16 129614.02 00:47:21.546 ======================================================== 00:47:21.546 Total : 3479.49 869.87 74243.14 21417.16 129614.02 00:47:21.546 00:47:21.546 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:47:21.546 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:21.806 rmmod nvme_tcp 00:47:21.806 rmmod nvme_fabrics 00:47:21.806 rmmod nvme_keyring 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74135 ']' 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74135 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74135 ']' 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74135 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:21.806 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74135 00:47:22.066 killing process with pid 74135 00:47:22.066 05:38:08 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:22.066 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:22.066 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74135' 00:47:22.066 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74135 00:47:22.066 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74135 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:22.326 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:47:22.585 ************************************ 00:47:22.585 END TEST nvmf_perf 00:47:22.585 ************************************ 
00:47:22.585 00:47:22.585 real 0m14.317s 00:47:22.585 user 0m51.837s 00:47:22.585 sys 0m3.833s 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:47:22.585 ************************************ 00:47:22.585 START TEST nvmf_fio_host 00:47:22.585 ************************************ 00:47:22.585 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:47:22.845 * Looking for test storage... 00:47:22.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:22.845 --rc genhtml_branch_coverage=1 00:47:22.845 --rc genhtml_function_coverage=1 00:47:22.845 --rc genhtml_legend=1 00:47:22.845 --rc geninfo_all_blocks=1 00:47:22.845 --rc geninfo_unexecuted_blocks=1 00:47:22.845 00:47:22.845 ' 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:22.845 --rc genhtml_branch_coverage=1 00:47:22.845 --rc genhtml_function_coverage=1 00:47:22.845 --rc genhtml_legend=1 00:47:22.845 --rc geninfo_all_blocks=1 00:47:22.845 --rc geninfo_unexecuted_blocks=1 00:47:22.845 00:47:22.845 ' 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:22.845 --rc genhtml_branch_coverage=1 00:47:22.845 --rc genhtml_function_coverage=1 00:47:22.845 --rc genhtml_legend=1 00:47:22.845 --rc geninfo_all_blocks=1 00:47:22.845 --rc geninfo_unexecuted_blocks=1 00:47:22.845 00:47:22.845 ' 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:22.845 --rc genhtml_branch_coverage=1 00:47:22.845 --rc genhtml_function_coverage=1 00:47:22.845 --rc genhtml_legend=1 00:47:22.845 --rc geninfo_all_blocks=1 00:47:22.845 --rc geninfo_unexecuted_blocks=1 00:47:22.845 00:47:22.845 ' 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:22.845 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:22.846 05:38:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:22.846 05:38:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:22.846 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:22.846 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:22.847 Cannot find device "nvmf_init_br" 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:22.847 Cannot find device "nvmf_init_br2" 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:22.847 Cannot find device "nvmf_tgt_br" 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:47:22.847 Cannot find device "nvmf_tgt_br2" 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:22.847 Cannot find device "nvmf_init_br" 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:22.847 Cannot find device "nvmf_init_br2" 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:22.847 Cannot find device "nvmf_tgt_br" 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:22.847 Cannot find device "nvmf_tgt_br2" 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:47:22.847 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:23.106 Cannot find device "nvmf_br" 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:23.106 Cannot find device "nvmf_init_if" 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:23.106 Cannot find device "nvmf_init_if2" 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:23.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:23.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:23.106 05:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:23.106 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:23.106 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:23.106 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:23.106 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:23.106 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:23.107 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:23.107 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:23.107 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:23.107 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:23.107 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:23.365 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:47:23.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:47:23.365 00:47:23.365 --- 10.0.0.3 ping statistics --- 00:47:23.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:23.365 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:23.365 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:23.365 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:47:23.365 00:47:23.365 --- 10.0.0.4 ping statistics --- 00:47:23.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:23.365 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:23.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:23.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:47:23.365 00:47:23.365 --- 10.0.0.1 ping statistics --- 00:47:23.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:23.365 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:23.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:23.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:47:23.365 00:47:23.365 --- 10.0.0.2 ping statistics --- 00:47:23.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:23.365 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74588 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74588 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74588 ']' 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:23.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:23.365 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:47:23.365 [2024-12-09 05:38:10.218326] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:47:23.365 [2024-12-09 05:38:10.218588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:23.624 [2024-12-09 05:38:10.372040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:23.624 [2024-12-09 05:38:10.411599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:23.624 [2024-12-09 05:38:10.411665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:23.624 [2024-12-09 05:38:10.411680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:23.624 [2024-12-09 05:38:10.411690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:23.624 [2024-12-09 05:38:10.411699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:47:23.624 [2024-12-09 05:38:10.412612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:23.624 [2024-12-09 05:38:10.412855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:23.624 [2024-12-09 05:38:10.413402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:23.624 [2024-12-09 05:38:10.413442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:23.624 [2024-12-09 05:38:10.449989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:23.624 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:23.624 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:47:23.624 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:47:23.882 [2024-12-09 05:38:10.791115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:23.882 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:47:23.882 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:23.882 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:47:24.141 05:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:47:24.399 Malloc1 00:47:24.399 05:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:24.656 05:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:47:24.914 05:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:25.176 [2024-12-09 05:38:11.953491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:25.176 05:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:47:25.434 05:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:47:25.434 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:47:25.434 fio-3.35 00:47:25.434 Starting 1 thread 00:47:27.965 00:47:27.965 test: (groupid=0, jobs=1): err= 0: pid=74668: Mon Dec 9 05:38:14 2024 00:47:27.965 read: IOPS=9378, BW=36.6MiB/s (38.4MB/s)(73.5MiB/2006msec) 00:47:27.965 slat (nsec): min=1770, max=333228, avg=2255.48, stdev=3226.07 00:47:27.965 clat (usec): min=2601, max=12865, avg=7117.46, stdev=549.62 00:47:27.965 lat (usec): min=2640, max=12867, avg=7119.71, stdev=549.42 00:47:27.965 clat percentiles (usec): 00:47:27.965 | 1.00th=[ 6063], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6718], 00:47:27.965 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7177], 00:47:27.965 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7701], 95.00th=[ 7963], 00:47:27.965 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[11076], 99.95th=[11994], 00:47:27.965 | 99.99th=[12780] 00:47:27.965 bw ( KiB/s): min=37072, max=37944, per=99.93%, avg=37486.00, stdev=427.40, samples=4 00:47:27.965 iops : min= 9268, max= 9486, avg=9371.50, stdev=106.85, samples=4 00:47:27.965 write: IOPS=9380, BW=36.6MiB/s (38.4MB/s)(73.5MiB/2006msec); 0 zone resets 00:47:27.965 slat (nsec): min=1867, max=251995, avg=2347.05, stdev=2386.88 00:47:27.965 clat (usec): min=2463, max=12029, avg=6481.25, stdev=505.09 00:47:27.965 lat (usec): min=2477, max=12031, avg=6483.60, stdev=504.97 00:47:27.965 clat 
percentiles (usec): 00:47:27.965 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6128], 00:47:27.965 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:47:27.965 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7242], 00:47:27.965 | 99.00th=[ 8029], 99.50th=[ 8455], 99.90th=[ 9896], 99.95th=[11076], 00:47:27.965 | 99.99th=[11994] 00:47:27.965 bw ( KiB/s): min=36992, max=37896, per=100.00%, avg=37522.00, stdev=401.72, samples=4 00:47:27.965 iops : min= 9248, max= 9474, avg=9380.50, stdev=100.43, samples=4 00:47:27.965 lat (msec) : 4=0.08%, 10=99.76%, 20=0.15% 00:47:27.965 cpu : usr=72.37%, sys=21.05%, ctx=8, majf=0, minf=7 00:47:27.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:47:27.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:27.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:27.965 issued rwts: total=18813,18818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:27.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:27.965 00:47:27.965 Run status group 0 (all jobs): 00:47:27.965 READ: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=73.5MiB (77.1MB), run=2006-2006msec 00:47:27.965 WRITE: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=73.5MiB (77.1MB), run=2006-2006msec 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:47:27.965 05:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:47:28.234 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:47:28.234 fio-3.35 00:47:28.234 Starting 1 thread 00:47:30.784 00:47:30.784 test: (groupid=0, jobs=1): err= 0: pid=74712: Mon Dec 9 05:38:17 2024 00:47:30.784 read: IOPS=8834, BW=138MiB/s (145MB/s)(277MiB/2007msec) 00:47:30.784 slat (usec): min=2, max=106, avg= 3.46, stdev= 2.24 00:47:30.784 clat (usec): min=2027, max=16379, avg=8225.22, stdev=2553.07 00:47:30.784 lat (usec): min=2030, max=16382, avg=8228.69, stdev=2553.13 00:47:30.784 clat percentiles (usec): 00:47:30.784 | 1.00th=[ 3720], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 5932], 00:47:30.784 | 30.00th=[ 6587], 40.00th=[ 7242], 50.00th=[ 7963], 60.00th=[ 8717], 00:47:30.784 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11600], 95.00th=[13042], 00:47:30.784 | 99.00th=[14877], 99.50th=[15401], 99.90th=[15926], 99.95th=[16057], 00:47:30.784 | 99.99th=[16319] 00:47:30.784 bw ( KiB/s): min=65984, max=74730, per=49.78%, avg=70362.50, stdev=3976.59, samples=4 00:47:30.784 iops : min= 4124, max= 4670, avg=4397.50, stdev=248.31, samples=4 00:47:30.784 write: IOPS=5074, BW=79.3MiB/s (83.1MB/s)(144MiB/1810msec); 0 zone resets 00:47:30.784 slat (usec): min=31, max=348, avg=35.99, stdev= 9.32 00:47:30.784 clat (usec): min=5845, max=19963, avg=11243.90, stdev=2110.02 00:47:30.784 lat (usec): min=5878, max=20002, avg=11279.89, stdev=2111.33 00:47:30.784 clat percentiles (usec): 00:47:30.784 | 1.00th=[ 7308], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9372], 00:47:30.784 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:47:30.784 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14091], 95.00th=[15008], 00:47:30.784 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18482], 99.95th=[19006], 00:47:30.784 | 99.99th=[20055] 00:47:30.784 bw ( KiB/s): min=68640, max=78690, per=90.45%, avg=73432.50, stdev=4618.41, samples=4 00:47:30.784 iops : min= 4290, max= 4918, avg=4589.50, stdev=288.60, samples=4 00:47:30.784 lat (msec) : 4=1.09%, 10=59.94%, 20=38.97% 00:47:30.784 cpu : usr=83.20%, sys=12.86%, ctx=6, majf=0, minf=8 00:47:30.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:47:30.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:30.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:30.784 issued rwts: total=17731,9184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:30.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:30.784 00:47:30.784 Run status group 0 (all jobs): 00:47:30.784 READ: 
bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=277MiB (291MB), run=2007-2007msec 00:47:30.784 WRITE: bw=79.3MiB/s (83.1MB/s), 79.3MiB/s-79.3MiB/s (83.1MB/s-83.1MB/s), io=144MiB (150MB), run=1810-1810msec 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:30.784 rmmod nvme_tcp 00:47:30.784 rmmod nvme_fabrics 00:47:30.784 rmmod nvme_keyring 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74588 ']' 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74588 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74588 ']' 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74588 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74588 00:47:30.784 killing process with pid 74588 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74588' 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74588 00:47:30.784 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74588 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:47:31.043 05:38:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:31.043 05:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:47:31.302 ************************************ 00:47:31.302 END TEST nvmf_fio_host 00:47:31.302 ************************************ 00:47:31.302 00:47:31.302 real 0m8.597s 00:47:31.302 user 0m34.478s 00:47:31.302 sys 0m2.227s 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:47:31.302 ************************************ 00:47:31.302 START TEST nvmf_failover 
00:47:31.302 ************************************ 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:47:31.302 * Looking for test storage... 00:47:31.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:31.302 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:31.562 --rc genhtml_branch_coverage=1 00:47:31.562 --rc genhtml_function_coverage=1 00:47:31.562 --rc genhtml_legend=1 00:47:31.562 --rc geninfo_all_blocks=1 00:47:31.562 --rc geninfo_unexecuted_blocks=1 00:47:31.562 00:47:31.562 ' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:31.562 --rc genhtml_branch_coverage=1 00:47:31.562 --rc genhtml_function_coverage=1 00:47:31.562 --rc genhtml_legend=1 00:47:31.562 --rc geninfo_all_blocks=1 00:47:31.562 --rc geninfo_unexecuted_blocks=1 00:47:31.562 00:47:31.562 ' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:31.562 --rc genhtml_branch_coverage=1 00:47:31.562 --rc genhtml_function_coverage=1 00:47:31.562 --rc genhtml_legend=1 00:47:31.562 --rc geninfo_all_blocks=1 00:47:31.562 --rc geninfo_unexecuted_blocks=1 00:47:31.562 00:47:31.562 ' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:31.562 --rc genhtml_branch_coverage=1 00:47:31.562 --rc genhtml_function_coverage=1 00:47:31.562 --rc genhtml_legend=1 00:47:31.562 --rc geninfo_all_blocks=1 00:47:31.562 --rc geninfo_unexecuted_blocks=1 00:47:31.562 00:47:31.562 ' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:31.562 
05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:31.562 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:31.562 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
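Annotation on the "[: : integer expression expected" message above: common.sh line 33 runs a numeric test ('[' '' -eq 1 ']') against a variable that expands to the empty string in this configuration, and the test builtin rejects an empty string where an integer is required. The branch is simply not taken, so the run continues, but a minimal defensive sketch is shown below; SOME_NVMF_FLAG and --some-extra-arg are hypothetical names, since the trace does not show which variable or arguments were involved.

    # Hypothetical guard, not the project's actual code: default the flag to 0
    # before the numeric comparison so an unset/empty value cannot trigger
    # "[: : integer expression expected".
    if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-extra-arg)   # placeholder for the conditional arguments
    fi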
00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:31.563 Cannot find device "nvmf_init_br" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:31.563 Cannot find device "nvmf_init_br2" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:47:31.563 Cannot find device "nvmf_tgt_br" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:31.563 Cannot find device "nvmf_tgt_br2" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:31.563 Cannot find device "nvmf_init_br" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:31.563 Cannot find device "nvmf_init_br2" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:31.563 Cannot find device "nvmf_tgt_br" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:31.563 Cannot find device "nvmf_tgt_br2" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:31.563 Cannot find device "nvmf_br" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:31.563 Cannot find device "nvmf_init_if" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:31.563 Cannot find device "nvmf_init_if2" 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:31.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:31.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:31.563 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:31.823 
05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:31.823 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:31.823 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:47:31.823 00:47:31.823 --- 10.0.0.3 ping statistics --- 00:47:31.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:31.823 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:31.823 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:31.823 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:47:31.823 00:47:31.823 --- 10.0.0.4 ping statistics --- 00:47:31.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:31.823 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:31.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:31.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:47:31.823 00:47:31.823 --- 10.0.0.1 ping statistics --- 00:47:31.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:31.823 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:31.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:31.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:47:31.823 00:47:31.823 --- 10.0.0.2 ping statistics --- 00:47:31.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:31.823 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:47:31.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
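The nvmf_veth_init block above builds the virtual network the rest of this run talks over: two initiator-side veth ends (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) stay in the root namespace, the target-side ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) are moved into nvmf_tgt_ns_spdk, all bridge-side peers are joined through nvmf_br, iptables ACCEPT rules are inserted for port 4420 and bridge forwarding, and the four pings confirm reachability in both directions. A condensed sketch of the same topology, using the names from the trace (run as root; the loops are editorial shorthand, the script issues each command separately and also applies the iptables rules omitted here):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: host/namespace end on the left, bridge end on the right
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target ends live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiators .1/.2, target .3/.4 on 10.0.0.0/24
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # one bridge ties the four bridge-side ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # reachability check, as in the trace
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4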
00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74985 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74985 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74985 ']' 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:31.823 05:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:47:32.082 [2024-12-09 05:38:18.822777] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:47:32.082 [2024-12-09 05:38:18.823031] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:32.082 [2024-12-09 05:38:18.973683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:32.082 [2024-12-09 05:38:19.000874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:32.082 [2024-12-09 05:38:19.001153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:32.082 [2024-12-09 05:38:19.001332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:32.082 [2024-12-09 05:38:19.001454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:32.082 [2024-12-09 05:38:19.001491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
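A quick decoding of the nvmf_tgt launch flags shown above (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE), for reading the startup notices around this point: -m 0xE is the CPU mask, and 0xE = binary 1110, i.e. cores 1, 2 and 3, which matches "Total cores available: 3" and the three "Reactor started on core N" notices that follow; core 0 is left out of the mask (the bdevperf client later in this log runs with -c 0x1, core 0 only). -e 0xFFFF sets the tracepoint group mask, hence the "Tracepoint Group Mask 0xFFFF specified" notice and the suggestion to use 'spdk_trace -s nvmf -i 0'. -i 0 is the shared-memory instance id, which is also where the /dev/shm/nvmf_trace.0 file name mentioned in the notices comes from.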
00:47:32.082 [2024-12-09 05:38:19.002384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:32.082 [2024-12-09 05:38:19.002478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:32.082 [2024-12-09 05:38:19.002475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:32.082 [2024-12-09 05:38:19.030257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:32.341 05:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:32.341 05:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:47:32.341 05:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:32.341 05:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:32.341 05:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:47:32.341 05:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:32.341 05:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:47:32.600 [2024-12-09 05:38:19.401257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:32.600 05:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:47:32.859 Malloc0 00:47:32.859 05:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:33.118 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:33.401 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:33.660 [2024-12-09 05:38:20.532711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:33.660 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:47:33.918 [2024-12-09 05:38:20.764771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:47:33.918 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:47:34.176 [2024-12-09 05:38:20.980912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:47:34.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
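At this point the target side is fully provisioned: the trace above creates the TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace and three TCP listeners on 10.0.0.3 ports 4420, 4421 and 4422, then starts the bdevperf client on its own RPC socket. Condensed into one sequence (paths and flags exactly as captured; the rpc variable and the port loop are editorial shorthand, the script issues each call separately):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MB bdev, 512 B blocks
    # -a = allow any host, -s = serial number
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s "$port"
    done
    # client side: bdevperf held in wait-for-RPC mode (-z) on its own socket,
    # queue depth 128, 4 KiB I/O, verify workload, 15 s run, remaining flags as captured
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &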
00:47:34.176 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75035 00:47:34.176 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:47:34.176 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:34.176 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75035 /var/tmp/bdevperf.sock 00:47:34.176 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75035 ']' 00:47:34.176 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:47:34.176 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:34.176 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:47:34.176 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:34.176 05:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:47:34.435 05:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:34.435 05:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:47:34.435 05:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:47:34.692 NVMe0n1 00:47:34.692 05:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:47:35.258 00:47:35.258 05:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75051 00:47:35.258 05:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:47:35.258 05:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:47:36.193 05:38:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:36.452 05:38:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:47:39.737 05:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:47:39.737 00:47:39.737 05:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:47:39.994 05:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:47:43.299 05:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:43.299 [2024-12-09 05:38:30.135574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:43.299 05:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:47:44.235 05:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:47:44.494 05:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75051 00:47:51.063 { 00:47:51.063 "results": [ 00:47:51.063 { 00:47:51.063 "job": "NVMe0n1", 00:47:51.063 "core_mask": "0x1", 00:47:51.063 "workload": "verify", 00:47:51.063 "status": "finished", 00:47:51.063 "verify_range": { 00:47:51.063 "start": 0, 00:47:51.063 "length": 16384 00:47:51.063 }, 00:47:51.063 "queue_depth": 128, 00:47:51.063 "io_size": 4096, 00:47:51.063 "runtime": 15.00745, 00:47:51.063 "iops": 9950.791107083482, 00:47:51.063 "mibps": 38.87027776204485, 00:47:51.063 "io_failed": 3717, 00:47:51.063 "io_timeout": 0, 00:47:51.063 "avg_latency_us": 12522.08236514624, 00:47:51.063 "min_latency_us": 603.2290909090909, 00:47:51.063 "max_latency_us": 14894.545454545454 00:47:51.063 } 00:47:51.063 ], 00:47:51.063 "core_count": 1 00:47:51.063 } 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75035 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75035 ']' 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75035 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75035 00:47:51.063 killing process with pid 75035 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75035' 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75035 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75035 00:47:51.063 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:47:51.063 [2024-12-09 05:38:21.043369] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
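The failover exercise itself is the listener shuffle captured above: bdevperf attaches NVMe0 over the 4420 and 4421 paths with -x failover, perform_tests starts the 15-second verify job, and while it runs the script removes the 4420 listener, adds a path on 4422, removes 4421, re-adds 4420 and finally removes 4422, forcing the host to fail over between paths several times. Reduced to the RPC calls shown in the trace (the rpc variable and the bperf_rpc helper are shorthand for the full command lines above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_rpc() { $rpc -s /var/tmp/bdevperf.sock "$@"; }
    subnqn=nqn.2016-06.io.spdk:cnode1

    bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n "$subnqn" -x failover
    bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n "$subnqn" -x failover
    # start the queued bdevperf job, then rotate the target's listeners under it
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $rpc nvmf_subsystem_remove_listener "$subnqn" -t tcp -a 10.0.0.3 -s 4420; sleep 3
    bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 \
        -f ipv4 -n "$subnqn" -x failover
    $rpc nvmf_subsystem_remove_listener "$subnqn" -t tcp -a 10.0.0.3 -s 4421; sleep 3
    $rpc nvmf_subsystem_add_listener    "$subnqn" -t tcp -a 10.0.0.3 -s 4420; sleep 1
    $rpc nvmf_subsystem_remove_listener "$subnqn" -t tcp -a 10.0.0.3 -s 4422
    wait   # for perform_tests to finish and report

The summary JSON above is self-consistent: 9950.79 IOPS of 4 KiB I/O is 9950.79 * 4096 / 2^20, about 38.87 MiB/s, matching the reported mibps, and with queue depth 128 the average latency of roughly 12.5 ms is close to what Little's law predicts (128 / 9950.79 is about 12.9 ms). The 3717 failed I/Os are consistent with requests aborted while paths were torn down, which is what the repeated "ABORTED - SQ DELETION" completions in the bdevperf log reproduced below show.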
00:47:51.063 [2024-12-09 05:38:21.043475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75035 ] 00:47:51.063 [2024-12-09 05:38:21.180769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:51.063 [2024-12-09 05:38:21.209545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:51.063 [2024-12-09 05:38:21.237130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:51.063 Running I/O for 15 seconds... 00:47:51.063 7700.00 IOPS, 30.08 MiB/s [2024-12-09T05:38:38.035Z] [2024-12-09 05:38:23.225172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:47:51.063 [2024-12-09 05:38:23.225227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:47:51.063 [2024-12-09 05:38:23.225277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:47:51.063 [2024-12-09 05:38:23.225304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:47:51.063 [2024-12-09 05:38:23.225331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615c60 is same with the state(6) to be set 00:47:51.063 [2024-12-09 05:38:23.225602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.063 [2024-12-09 05:38:23.225629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.225668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.225699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 
[2024-12-09 05:38:23.225728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.225757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.225785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.225839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.225868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.225897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.225926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.225955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.225985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.225999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.226014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.226028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.226043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.063 [2024-12-09 05:38:23.226056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.063 [2024-12-09 05:38:23.226071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:47:51.064 [2024-12-09 05:38:23.226754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.226971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.226998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227075] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.064 [2024-12-09 05:38:23.227328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.064 [2024-12-09 05:38:23.227341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227356] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72472 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.227985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.227999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 
[2024-12-09 05:38:23.228027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.065 [2024-12-09 05:38:23.228573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.065 [2024-12-09 05:38:23.228586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:47:51.066 [2024-12-09 05:38:23.228941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.228969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.228983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.229021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.229050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.229079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:23.229107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229240] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.066 [2024-12-09 05:38:23.229523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:23.229539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:47:51.066 [2024-12-09 05:38:23.229552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:47:51.066 [2024-12-09 05:38:23.229577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:47:51.066 [2024-12-09 05:38:23.229591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:47:51.066 [2024-12-09 05:38:23.229608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x684f40 is same with the state(6) to be set
00:47:51.066 [2024-12-09 05:38:23.229625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:47:51.066 [2024-12-09 05:38:23.229636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:47:51.066 [2024-12-09 05:38:23.229646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72856 len:8 PRP1 0x0 PRP2 0x0
00:47:51.066 [2024-12-09 05:38:23.229660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:47:51.066 [2024-12-09 05:38:23.229715] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:47:51.066 [2024-12-09 05:38:23.229735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:47:51.066 [2024-12-09 05:38:23.233253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:47:51.066 [2024-12-09 05:38:23.233288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615c60 (9): Bad file descriptor
00:47:51.066 [2024-12-09 05:38:23.264505] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:47:51.066 8477.50 IOPS, 33.12 MiB/s [2024-12-09T05:38:38.038Z] 9067.67 IOPS, 35.42 MiB/s [2024-12-09T05:38:38.038Z] 9342.75 IOPS, 36.50 MiB/s [2024-12-09T05:38:38.038Z] [2024-12-09 05:38:26.848290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.066 [2024-12-09 05:38:26.848369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.066 [2024-12-09 05:38:26.848415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.848431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.848460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.848488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.848516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.848544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.848587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.848631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848690] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.848983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.848997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.849026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.849054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.849083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.849111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.849140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.849169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.849197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.849233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.849263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.849292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.849321] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.849350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.067 [2024-12-09 05:38:26.849378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.849407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.067 [2024-12-09 05:38:26.849436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.067 [2024-12-09 05:38:26.849451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:51.068 [2024-12-09 05:38:26.849882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.068 [2024-12-09 05:38:26.849911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.068 [2024-12-09 05:38:26.849941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:47:51.068 [2024-12-09 05:38:26.849957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:47:51.068 [2024-12-09 05:38:26.849971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for the remaining queued WRITE (lba 110688-110976) and READ (lba 110280-110520) commands, each completed ABORTED - SQ DELETION (00/08) ...]
00:47:51.070 [2024-12-09 05:38:26.852173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x689370 is same with the state(6) to be set 
[... nvme_qpair_abort_queued_reqs *ERROR* / nvme_qpair_manual_complete_request *NOTICE* pairs for queued READ lba:110528 and WRITE lba:110984-111040 commands, each completed ABORTED - SQ DELETION (00/08) ...]
00:47:51.070 [2024-12-09 05:38:26.852695] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 
[... nvme_admin_qpair_print_command *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3-0 entries, each completed ABORTED - SQ DELETION (00/08) ...]
00:47:51.070 [2024-12-09 05:38:26.852891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:47:51.070 [2024-12-09 05:38:26.852947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615c60 (9): Bad file descriptor 00:47:51.070 [2024-12-09 05:38:26.856509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:47:51.070 [2024-12-09 05:38:26.879016] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:47:51.070 9398.60 IOPS, 36.71 MiB/s [2024-12-09T05:38:38.042Z] 9541.50 IOPS, 37.27 MiB/s [2024-12-09T05:38:38.042Z] 9628.71 IOPS, 37.61 MiB/s [2024-12-09T05:38:38.042Z] 9698.12 IOPS, 37.88 MiB/s [2024-12-09T05:38:38.042Z] 9761.89 IOPS, 38.13 MiB/s [2024-12-09T05:38:38.042Z] [2024-12-09 05:38:31.417539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:47:51.070 [2024-12-09 05:38:31.417612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... remaining nvme_admin_qpair_print_command *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1-3 entries, each completed ABORTED - SQ DELETION (00/08) ...]
00:47:51.070 [2024-12-09 05:38:31.417729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615c60 is same with the state(6) to be set 
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for queued WRITE (lba 96120-96560) and READ (lba 95672-96104) commands, each completed ABORTED - SQ DELETION (00/08) ...]
00:47:51.073 [2024-12-09 05:38:31.421931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6859f0 is same with the state(6) to be set 
[... nvme_qpair_abort_queued_reqs *ERROR* / nvme_qpair_manual_complete_request *NOTICE* pairs for queued READ lba:96112 and WRITE lba:96568-96576 commands, each completed ABORTED - SQ DELETION (00/08) ...]
00:47:51.073 [2024-12-09 05:38:31.422076] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.073 [2024-12-09 05:38:31.422089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.073 [2024-12-09 05:38:31.422098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.073 [2024-12-09 05:38:31.422108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96584 len:8 PRP1 0x0 PRP2 0x0 00:47:51.073 [2024-12-09 05:38:31.422121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.073 [2024-12-09 05:38:31.422134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.073 [2024-12-09 05:38:31.422144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.073 [2024-12-09 05:38:31.422154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96592 len:8 PRP1 0x0 PRP2 0x0 00:47:51.073 [2024-12-09 05:38:31.422167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.073 [2024-12-09 05:38:31.422180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.073 [2024-12-09 05:38:31.422223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.073 [2024-12-09 05:38:31.422235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96600 len:8 PRP1 0x0 PRP2 0x0 00:47:51.073 [2024-12-09 05:38:31.422249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.073 [2024-12-09 05:38:31.422263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.073 [2024-12-09 05:38:31.422273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96608 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 05:38:31.422311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.074 [2024-12-09 05:38:31.422321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96616 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 05:38:31.422361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.074 [2024-12-09 05:38:31.422372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96624 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 05:38:31.422410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.074 [2024-12-09 05:38:31.422420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96632 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 05:38:31.422457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.074 [2024-12-09 05:38:31.422467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96640 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 05:38:31.422519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.074 [2024-12-09 05:38:31.422529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96648 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 05:38:31.422565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.074 [2024-12-09 05:38:31.422574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96656 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 05:38:31.422644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.074 [2024-12-09 05:38:31.422655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96664 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 05:38:31.422693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.074 [2024-12-09 05:38:31.422703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96672 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 
05:38:31.422740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.074 [2024-12-09 05:38:31.422750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96680 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 05:38:31.422789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:47:51.074 [2024-12-09 05:38:31.422800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:51.074 [2024-12-09 05:38:31.422810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0 00:47:51.074 [2024-12-09 05:38:31.422823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:51.074 [2024-12-09 05:38:31.422872] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:47:51.074 [2024-12-09 05:38:31.422891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:47:51.074 [2024-12-09 05:38:31.426320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:47:51.074 [2024-12-09 05:38:31.426363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615c60 (9): Bad file descriptor 00:47:51.074 [2024-12-09 05:38:31.457403] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:47:51.074 9766.40 IOPS, 38.15 MiB/s [2024-12-09T05:38:38.046Z] 9818.91 IOPS, 38.36 MiB/s [2024-12-09T05:38:38.046Z] 9857.33 IOPS, 38.51 MiB/s [2024-12-09T05:38:38.046Z] 9892.31 IOPS, 38.64 MiB/s [2024-12-09T05:38:38.046Z] 9921.14 IOPS, 38.75 MiB/s [2024-12-09T05:38:38.046Z] 9949.87 IOPS, 38.87 MiB/s 00:47:51.074 Latency(us) 00:47:51.074 [2024-12-09T05:38:38.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:51.074 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:47:51.074 Verification LBA range: start 0x0 length 0x4000 00:47:51.074 NVMe0n1 : 15.01 9950.79 38.87 247.68 0.00 12522.08 603.23 14894.55 00:47:51.074 [2024-12-09T05:38:38.046Z] =================================================================================================================== 00:47:51.074 [2024-12-09T05:38:38.046Z] Total : 9950.79 38.87 247.68 0.00 12522.08 603.23 14894.55 00:47:51.074 Received shutdown signal, test time was about 15.000000 seconds 00:47:51.074 00:47:51.074 Latency(us) 00:47:51.074 [2024-12-09T05:38:38.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:51.074 [2024-12-09T05:38:38.046Z] =================================================================================================================== 00:47:51.074 [2024-12-09T05:38:38.046Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:47:51.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
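The grep above closes out the first bdevperf phase: the test passes only if the captured log records exactly three 'Resetting controller successful' events. A minimal sketch of that check, assuming the same try.txt file this run reads and removes later (failover.sh@94/@115):

    # count controller resets recorded during the 15-second failover run
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    # the test requires exactly three successful resets
    (( count == 3 )) || exit 1

The count=3 result on the next lines confirms that check succeeded before the second bdevperf instance is launched.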
00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75225 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75225 /var/tmp/bdevperf.sock 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75225 ']' 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:47:51.074 [2024-12-09 05:38:37.857812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:47:51.074 05:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:47:51.334 [2024-12-09 05:38:38.109175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:47:51.334 05:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:47:51.593 NVMe0n1 00:47:51.593 05:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:47:51.852 00:47:51.853 05:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:47:52.112 00:47:52.112 05:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:47:52.112 05:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:47:52.679 05:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:47:52.679 05:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:47:55.980 05:38:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:47:55.980 05:38:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:47:55.980 05:38:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75294 00:47:55.980 05:38:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:47:55.980 05:38:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75294 00:47:57.353 { 00:47:57.353 "results": [ 00:47:57.353 { 00:47:57.353 "job": "NVMe0n1", 00:47:57.353 "core_mask": "0x1", 00:47:57.353 "workload": "verify", 00:47:57.353 "status": "finished", 00:47:57.353 "verify_range": { 00:47:57.353 "start": 0, 00:47:57.353 "length": 16384 00:47:57.353 }, 00:47:57.353 "queue_depth": 128, 00:47:57.353 "io_size": 4096, 00:47:57.353 "runtime": 1.008891, 00:47:57.353 "iops": 9678.94450441128, 00:47:57.353 "mibps": 37.80837697035656, 00:47:57.353 "io_failed": 0, 00:47:57.353 "io_timeout": 0, 00:47:57.353 "avg_latency_us": 13150.101324396033, 00:47:57.353 "min_latency_us": 1385.1927272727273, 00:47:57.353 "max_latency_us": 15490.327272727272 00:47:57.353 } 00:47:57.353 ], 00:47:57.353 "core_count": 1 00:47:57.353 } 00:47:57.353 05:38:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:47:57.353 [2024-12-09 05:38:37.343463] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
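Taken together, failover.sh@76-@92 above set up the failover scenario for this second bdevperf instance: two extra listeners are registered on the target, the same bdev (NVMe0) is attached over three TCP paths with -x failover, the active 4420 path is detached to force a failover, and perform_tests drives I/O while that happens. Condensed to the essential RPCs (all taken from the trace above, script paths shortened), the sequence is roughly:

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
    # one controller name, three alternate paths
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # drop the active path; I/O must fail over to 4421
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The JSON block above reports that 1-second run; its mibps field is simply iops times io_size, 9678.94 x 4096 B / 2^20, about 37.81 MiB/s, the same figure that appears in the latency table printed from try.txt below.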
00:47:57.353 [2024-12-09 05:38:37.344234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75225 ] 00:47:57.353 [2024-12-09 05:38:37.492209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:57.353 [2024-12-09 05:38:37.520983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:57.353 [2024-12-09 05:38:37.547905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:57.353 [2024-12-09 05:38:39.604808] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:47:57.353 [2024-12-09 05:38:39.605244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:47:57.353 [2024-12-09 05:38:39.605352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:57.353 [2024-12-09 05:38:39.605436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:47:57.353 [2024-12-09 05:38:39.605504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:57.353 [2024-12-09 05:38:39.605567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:47:57.353 [2024-12-09 05:38:39.605679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:57.353 [2024-12-09 05:38:39.605747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:47:57.353 [2024-12-09 05:38:39.605824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:57.353 [2024-12-09 05:38:39.605887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:47:57.353 [2024-12-09 05:38:39.606015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:47:57.353 [2024-12-09 05:38:39.606102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a4c60 (9): Bad file descriptor 00:47:57.353 [2024-12-09 05:38:39.611694] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:47:57.353 Running I/O for 1 seconds... 
00:47:57.353 9637.00 IOPS, 37.64 MiB/s 00:47:57.353 Latency(us) 00:47:57.353 [2024-12-09T05:38:44.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:57.353 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:47:57.353 Verification LBA range: start 0x0 length 0x4000 00:47:57.353 NVMe0n1 : 1.01 9678.94 37.81 0.00 0.00 13150.10 1385.19 15490.33 00:47:57.353 [2024-12-09T05:38:44.325Z] =================================================================================================================== 00:47:57.353 [2024-12-09T05:38:44.325Z] Total : 9678.94 37.81 0.00 0.00 13150.10 1385.19 15490.33 00:47:57.353 05:38:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:47:57.353 05:38:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:47:57.612 05:38:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:47:57.612 05:38:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:47:57.612 05:38:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:47:58.178 05:38:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:47:58.178 05:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75225 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75225 ']' 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75225 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75225 00:48:01.461 killing process with pid 75225 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75225' 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75225 00:48:01.461 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75225 00:48:01.720 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:48:01.720 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:01.978 rmmod nvme_tcp 00:48:01.978 rmmod nvme_fabrics 00:48:01.978 rmmod nvme_keyring 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74985 ']' 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74985 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74985 ']' 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74985 00:48:01.978 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:48:02.236 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:02.237 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74985 00:48:02.237 killing process with pid 74985 00:48:02.237 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:02.237 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:02.237 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74985' 00:48:02.237 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74985 00:48:02.237 05:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74985 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:48:02.237 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:48:02.495 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:02.496 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:02.496 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:02.496 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:48:02.496 ************************************ 00:48:02.496 END TEST nvmf_failover 00:48:02.496 ************************************ 00:48:02.496 00:48:02.496 real 0m31.226s 00:48:02.496 user 2m0.741s 00:48:02.496 sys 0m5.130s 00:48:02.496 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:02.496 05:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:48:02.496 05:38:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:48:02.496 05:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:48:02.496 05:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:02.496 05:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:48:02.496 ************************************ 00:48:02.496 START TEST nvmf_host_discovery 00:48:02.496 ************************************ 00:48:02.496 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:48:02.755 * Looking for test storage... 
00:48:02.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:02.755 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:48:02.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:02.755 --rc genhtml_branch_coverage=1 00:48:02.755 --rc genhtml_function_coverage=1 00:48:02.755 --rc genhtml_legend=1 00:48:02.755 --rc geninfo_all_blocks=1 00:48:02.755 --rc geninfo_unexecuted_blocks=1 00:48:02.755 00:48:02.755 ' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:48:02.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:02.756 --rc genhtml_branch_coverage=1 00:48:02.756 --rc genhtml_function_coverage=1 00:48:02.756 --rc genhtml_legend=1 00:48:02.756 --rc geninfo_all_blocks=1 00:48:02.756 --rc geninfo_unexecuted_blocks=1 00:48:02.756 00:48:02.756 ' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:48:02.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:02.756 --rc genhtml_branch_coverage=1 00:48:02.756 --rc genhtml_function_coverage=1 00:48:02.756 --rc genhtml_legend=1 00:48:02.756 --rc geninfo_all_blocks=1 00:48:02.756 --rc geninfo_unexecuted_blocks=1 00:48:02.756 00:48:02.756 ' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:48:02.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:02.756 --rc genhtml_branch_coverage=1 00:48:02.756 --rc genhtml_function_coverage=1 00:48:02.756 --rc genhtml_legend=1 00:48:02.756 --rc geninfo_all_blocks=1 00:48:02.756 --rc geninfo_unexecuted_blocks=1 00:48:02.756 00:48:02.756 ' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:02.756 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:48:02.756 Cannot find device "nvmf_init_br" 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:48:02.756 Cannot find device "nvmf_init_br2" 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:48:02.756 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:48:02.757 Cannot find device "nvmf_tgt_br" 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:48:02.757 Cannot find device "nvmf_tgt_br2" 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:48:02.757 Cannot find device "nvmf_init_br" 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:48:02.757 Cannot find device "nvmf_init_br2" 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:48:02.757 Cannot find device "nvmf_tgt_br" 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:48:02.757 Cannot find device "nvmf_tgt_br2" 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:48:02.757 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:48:03.016 Cannot find device "nvmf_br" 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:48:03.016 Cannot find device "nvmf_init_if" 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:48:03.016 Cannot find device "nvmf_init_if2" 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:03.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:03.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:48:03.016 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:03.016 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:48:03.016 00:48:03.016 --- 10.0.0.3 ping statistics --- 00:48:03.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:03.016 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:48:03.016 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:48:03.016 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:48:03.016 00:48:03.016 --- 10.0.0.4 ping statistics --- 00:48:03.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:03.016 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:03.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:03.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:48:03.016 00:48:03.016 --- 10.0.0.1 ping statistics --- 00:48:03.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:03.016 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:48:03.016 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:48:03.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:48:03.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:48:03.016 00:48:03.016 --- 10.0.0.2 ping statistics --- 00:48:03.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:03.016 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:48:03.276 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:03.276 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:48:03.276 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:03.276 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:03.276 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:03.276 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:03.276 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:03.276 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:03.276 05:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75622 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75622 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75622 ']' 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:03.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:03.276 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.276 [2024-12-09 05:38:50.066371] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:48:03.276 [2024-12-09 05:38:50.066493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:03.276 [2024-12-09 05:38:50.201203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:03.276 [2024-12-09 05:38:50.229427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:03.276 [2024-12-09 05:38:50.229491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:03.276 [2024-12-09 05:38:50.229506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:03.276 [2024-12-09 05:38:50.229516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:03.276 [2024-12-09 05:38:50.229526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:03.276 [2024-12-09 05:38:50.229968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:03.536 [2024-12-09 05:38:50.259084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.536 [2024-12-09 05:38:50.373801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.536 [2024-12-09 05:38:50.381908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:03.536 05:38:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.536 null0 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.536 null1 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:03.536 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75647 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75647 /tmp/host.sock 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75647 ']' 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:03.537 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:03.537 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.537 [2024-12-09 05:38:50.471239] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:48:03.537 [2024-12-09 05:38:50.471333] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75647 ] 00:48:03.796 [2024-12-09 05:38:50.624732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:03.796 [2024-12-09 05:38:50.663335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:03.796 [2024-12-09 05:38:50.695675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:03.796 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:04.056 05:38:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.056 05:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.056 05:38:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:48:04.056 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:04.056 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:48:04.056 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.056 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:48:04.056 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.056 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:48:04.056 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.315 [2024-12-09 05:38:51.106082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:04.315 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:48:04.573 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:48:04.573 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:04.573 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:04.573 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:04.573 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:48:04.573 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:48:04.573 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:48:04.573 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:04.573 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:48:04.573 05:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:48:04.832 [2024-12-09 05:38:51.766820] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:48:04.832 [2024-12-09 05:38:51.766844] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:48:04.832 [2024-12-09 05:38:51.766901] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:04.832 [2024-12-09 05:38:51.772857] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:48:05.090 [2024-12-09 05:38:51.827276] 
bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:48:05.090 [2024-12-09 05:38:51.828180] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15dde60:1 started. 00:48:05.090 [2024-12-09 05:38:51.829768] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:48:05.090 [2024-12-09 05:38:51.829807] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:48:05.090 [2024-12-09 05:38:51.835413] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15dde60 was disconnected and freed. delete nvme_qpair. 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:05.659 05:38:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:48:05.659 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.660 [2024-12-09 05:38:52.568580] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15ec2f0:1 started. 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:48:05.660 [2024-12-09 05:38:52.576213] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15ec2f0 was disconnected and freed. delete nvme_qpair. 
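For reference, the scenario being traced here reduces to a short RPC sequence. The veth/bridge topology built earlier (nvmf_init_if 10.0.0.1 and nvmf_init_if2 10.0.0.2 on the host side, nvmf_tgt_if 10.0.0.3 and nvmf_tgt_if2 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge) is what lets the host-side application reach the target's 10.0.0.3 listeners. The sketch below is a minimal reconstruction assuming the RPCs are issued directly with SPDK's scripts/rpc.py; the test's rpc_cmd helper forwards the same arguments, with the target application on its default RPC socket inside the namespace and the host-side application on /tmp/host.sock:

  # Target side: TCP transport, discovery listener, null bdevs, and the test subsystem.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  # Host side: start discovery against the 8009 listener and inspect what shows up.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs

The test actually starts the host-side discovery before the target-side pieces exist, so each waitforcondition loop in the trace simply re-polls the "get" RPCs (at most ten times, per the max=10 counter) until the controller name, bdev list, listener ports, or notification count matches the expected value once the corresponding discovery AER has been processed; the 4421 listener and the removal of the 4420 listener are exercised in the trace that follows.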
00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:48:05.660 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:48:05.919 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.920 [2024-12-09 05:38:52.679555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:48:05.920 [2024-12-09 05:38:52.680063] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:48:05.920 [2024-12-09 05:38:52.680085] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:48:05.920 [2024-12-09 05:38:52.686073] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.920 05:38:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.920 [2024-12-09 05:38:52.750448] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:05.920 to 10.0.0.3:4421 00:48:05.920 [2024-12-09 05:38:52.750489] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:48:05.920 [2024-12-09 05:38:52.750511] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:48:05.920 [2024-12-09 05:38:52.750528] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == 
'"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:48:05.920 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:06.180 [2024-12-09 05:38:52.916489] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:48:06.180 [2024-12-09 05:38:52.916533] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:06.180 [2024-12-09 05:38:52.917413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:06.180 [2024-12-09 05:38:52.917455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:06.180 [2024-12-09 05:38:52.917466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:06.180 [2024-12-09 05:38:52.917473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:06.180 [2024-12-09 05:38:52.917482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:06.180 [2024-12-09 05:38:52.917489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:06.180 [2024-12-09 05:38:52.917497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:06.180 [2024-12-09 05:38:52.917505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:06.180 [2024-12-09 05:38:52.917512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba240 is same with the state(6) to be set 00:48:06.180 [2024-12-09 05:38:52.922604] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:48:06.180 [2024-12-09 05:38:52.922639] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:48:06.180 [2024-12-09 05:38:52.922700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba240 (9): Bad file descriptor 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:06.180 05:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:06.180 05:38:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:06.180 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:06.438 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:06.439 05:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:07.375 [2024-12-09 05:38:54.337679] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:48:07.375 [2024-12-09 05:38:54.337703] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:48:07.375 [2024-12-09 05:38:54.337735] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:07.375 [2024-12-09 05:38:54.343728] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:48:07.635 [2024-12-09 05:38:54.402058] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:48:07.635 [2024-12-09 05:38:54.402848] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x15ea6d0:1 started. 00:48:07.635 [2024-12-09 05:38:54.404624] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:48:07.635 [2024-12-09 05:38:54.404674] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:48:07.635 [2024-12-09 05:38:54.406638] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x15ea6d0 was disconnected and freed. delete nvme_qpair. 
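The trace that follows re-issues bdev_nvme_start_discovery under the same controller name and expects the JSON-RPC call to fail with -17 ("File exists"). A minimal sketch of that check, run outside the harness, assuming an SPDK host application is already listening on /tmp/host.sock and using the rpc.py path shown elsewhere in this log:

  # Start discovery against the target's discovery service and wait for the attach to finish (-w).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # Re-using the name "nvme" must be rejected; the NOT wrapper below asserts exactly this.
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
      echo "duplicate discovery name was unexpectedly accepted" >&2
      exit 1
  fi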
00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:07.635 request: 00:48:07.635 { 00:48:07.635 "name": "nvme", 00:48:07.635 "trtype": "tcp", 00:48:07.635 "traddr": "10.0.0.3", 00:48:07.635 "adrfam": "ipv4", 00:48:07.635 "trsvcid": "8009", 00:48:07.635 "hostnqn": "nqn.2021-12.io.spdk:test", 00:48:07.635 "wait_for_attach": true, 00:48:07.635 "method": "bdev_nvme_start_discovery", 00:48:07.635 "req_id": 1 00:48:07.635 } 00:48:07.635 Got JSON-RPC error response 00:48:07.635 response: 00:48:07.635 { 00:48:07.635 "code": -17, 00:48:07.635 "message": "File exists" 00:48:07.635 } 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:07.635 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:07.635 request: 00:48:07.635 { 00:48:07.635 "name": "nvme_second", 00:48:07.635 "trtype": "tcp", 00:48:07.636 "traddr": "10.0.0.3", 00:48:07.636 "adrfam": "ipv4", 00:48:07.636 "trsvcid": "8009", 00:48:07.636 "hostnqn": "nqn.2021-12.io.spdk:test", 00:48:07.636 "wait_for_attach": true, 00:48:07.636 "method": "bdev_nvme_start_discovery", 00:48:07.636 "req_id": 1 00:48:07.636 } 00:48:07.636 Got JSON-RPC error response 00:48:07.636 response: 00:48:07.636 { 00:48:07.636 "code": -17, 00:48:07.636 "message": "File exists" 00:48:07.636 } 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:48:07.636 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:07.895 05:38:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:08.831 [2024-12-09 05:38:55.676932] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:48:08.831 [2024-12-09 05:38:55.677007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8fd0 with addr=10.0.0.3, port=8010 00:48:08.831 [2024-12-09 05:38:55.677024] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:48:08.831 [2024-12-09 
05:38:55.677033] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:48:08.831 [2024-12-09 05:38:55.677041] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:48:09.768 [2024-12-09 05:38:56.676908] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:48:09.768 [2024-12-09 05:38:56.676977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8fd0 with addr=10.0.0.3, port=8010 00:48:09.768 [2024-12-09 05:38:56.676991] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:48:09.768 [2024-12-09 05:38:56.677000] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:48:09.768 [2024-12-09 05:38:56.677017] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:48:11.146 [2024-12-09 05:38:57.676831] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:48:11.146 request: 00:48:11.146 { 00:48:11.146 "name": "nvme_second", 00:48:11.146 "trtype": "tcp", 00:48:11.146 "traddr": "10.0.0.3", 00:48:11.146 "adrfam": "ipv4", 00:48:11.146 "trsvcid": "8010", 00:48:11.146 "hostnqn": "nqn.2021-12.io.spdk:test", 00:48:11.146 "wait_for_attach": false, 00:48:11.146 "attach_timeout_ms": 3000, 00:48:11.146 "method": "bdev_nvme_start_discovery", 00:48:11.146 "req_id": 1 00:48:11.146 } 00:48:11.146 Got JSON-RPC error response 00:48:11.146 response: 00:48:11.146 { 00:48:11.146 "code": -110, 00:48:11.146 "message": "Connection timed out" 00:48:11.146 } 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75647 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:48:11.146 05:38:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:11.146 rmmod nvme_tcp 00:48:11.146 rmmod nvme_fabrics 00:48:11.146 rmmod nvme_keyring 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75622 ']' 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75622 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75622 ']' 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75622 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75622 00:48:11.146 killing process with pid 75622 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75622' 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75622 00:48:11.146 05:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75622 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:48:11.146 05:38:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:48:11.146 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:48:11.405 00:48:11.405 real 0m8.841s 00:48:11.405 user 0m16.904s 00:48:11.405 sys 0m1.846s 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:48:11.405 ************************************ 00:48:11.405 END TEST nvmf_host_discovery 00:48:11.405 ************************************ 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:11.405 05:38:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:48:11.405 ************************************ 00:48:11.406 START TEST nvmf_host_multipath_status 00:48:11.406 ************************************ 00:48:11.406 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:48:11.665 * Looking for test storage... 
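The nvmf_host_discovery run above ends with nvmftestfini, which unloads the NVMe/TCP host modules, strips only the firewall rules the harness added, and dismantles the veth/bridge/namespace topology. A rough sketch of that teardown, assuming the interface and namespace names used throughout this log and that remove_spdk_ns ultimately deletes the nvmf_tgt_ns_spdk namespace:

  # Removing nvme-tcp also pulls out nvme_fabrics and nvme_keyring, as the rmmod lines above show.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Drop only the rules tagged with an SPDK_NVMF comment, leaving the rest of the ruleset intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Detach the bridge ports, bring them down, then delete the bridge, veth pairs and namespace.
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns del nvmf_tgt_ns_spdk   # assumption: this is what remove_spdk_ns boils down to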
00:48:11.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:11.665 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:48:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:11.666 --rc genhtml_branch_coverage=1 00:48:11.666 --rc genhtml_function_coverage=1 00:48:11.666 --rc genhtml_legend=1 00:48:11.666 --rc geninfo_all_blocks=1 00:48:11.666 --rc geninfo_unexecuted_blocks=1 00:48:11.666 00:48:11.666 ' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:48:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:11.666 --rc genhtml_branch_coverage=1 00:48:11.666 --rc genhtml_function_coverage=1 00:48:11.666 --rc genhtml_legend=1 00:48:11.666 --rc geninfo_all_blocks=1 00:48:11.666 --rc geninfo_unexecuted_blocks=1 00:48:11.666 00:48:11.666 ' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:48:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:11.666 --rc genhtml_branch_coverage=1 00:48:11.666 --rc genhtml_function_coverage=1 00:48:11.666 --rc genhtml_legend=1 00:48:11.666 --rc geninfo_all_blocks=1 00:48:11.666 --rc geninfo_unexecuted_blocks=1 00:48:11.666 00:48:11.666 ' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:48:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:11.666 --rc genhtml_branch_coverage=1 00:48:11.666 --rc genhtml_function_coverage=1 00:48:11.666 --rc genhtml_legend=1 00:48:11.666 --rc geninfo_all_blocks=1 00:48:11.666 --rc geninfo_unexecuted_blocks=1 00:48:11.666 00:48:11.666 ' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:11.666 05:38:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:11.666 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:48:11.666 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:48:11.667 Cannot find device "nvmf_init_br" 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:48:11.667 Cannot find device "nvmf_init_br2" 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:48:11.667 Cannot find device "nvmf_tgt_br" 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:48:11.667 Cannot find device "nvmf_tgt_br2" 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:48:11.667 Cannot find device "nvmf_init_br" 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:48:11.667 Cannot find device "nvmf_init_br2" 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:48:11.667 Cannot find device "nvmf_tgt_br" 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:48:11.667 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:48:11.927 Cannot find device "nvmf_tgt_br2" 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:48:11.927 Cannot find device "nvmf_br" 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:48:11.927 Cannot find device "nvmf_init_if" 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:48:11.927 Cannot find device "nvmf_init_if2" 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:11.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:11.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:48:11.927 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:48:11.927 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:11.927 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:48:11.927 00:48:11.927 --- 10.0.0.3 ping statistics --- 00:48:11.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:11.928 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:48:11.928 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:48:11.928 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:48:11.928 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:48:11.928 00:48:11.928 --- 10.0.0.4 ping statistics --- 00:48:11.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:11.928 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:48:11.928 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:12.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:12.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:48:12.187 00:48:12.187 --- 10.0.0.1 ping statistics --- 00:48:12.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:12.187 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:48:12.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:12.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:48:12.187 00:48:12.187 --- 10.0.0.2 ping statistics --- 00:48:12.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:12.187 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76137 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76137 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76137 ']' 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:12.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
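The trace up to this point is nvmf/common.sh building the test fabric every later step relies on: veth pairs for the initiator side (nvmf_init_if, nvmf_init_if2) and for the target side (nvmf_tgt_if, nvmf_tgt_if2, moved into the nvmf_tgt_ns_spdk namespace), all of their peer ends enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP/4420 on the initiator interfaces, and ping checks in both directions. A condensed sketch of the same topology for one interface pair, using the names and addresses from this run (the *_if2 pair follows the same pattern with 10.0.0.2 and 10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target-namespace reachability check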
00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:12.187 05:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:48:12.187 [2024-12-09 05:38:58.988341] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:48:12.187 [2024-12-09 05:38:58.988423] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:12.187 [2024-12-09 05:38:59.137650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:12.447 [2024-12-09 05:38:59.166713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:12.447 [2024-12-09 05:38:59.166774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:12.447 [2024-12-09 05:38:59.166800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:12.447 [2024-12-09 05:38:59.166807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:12.447 [2024-12-09 05:38:59.166814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:12.447 [2024-12-09 05:38:59.167602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:12.447 [2024-12-09 05:38:59.167606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:12.447 [2024-12-09 05:38:59.195080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:12.447 05:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:12.447 05:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:48:12.447 05:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:12.447 05:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:12.447 05:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:48:12.447 05:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:12.447 05:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76137 00:48:12.447 05:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:48:12.706 [2024-12-09 05:38:59.568918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:12.706 05:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:48:12.965 Malloc0 00:48:12.965 05:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:48:13.224 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:48:13.505 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:48:13.784 [2024-12-09 05:39:00.669109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:13.784 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:48:14.042 [2024-12-09 05:39:00.889195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:48:14.042 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76180 00:48:14.042 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:48:14.042 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:14.042 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76180 /var/tmp/bdevperf.sock 00:48:14.042 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76180 ']' 00:48:14.042 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:14.042 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:14.042 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:14.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
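Before the I/O phase, the target side is assembled entirely through rpc.py against the nvmf_tgt started in the namespace: a TCP transport, a Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and listeners on 10.0.0.3 ports 4420 and 4421. A condensed sketch of the sequence the trace walks through, with the flags copied from this run (RPC stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used above):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421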
00:48:14.042 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:14.042 05:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:48:14.977 05:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:14.977 05:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:48:14.977 05:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:48:15.236 05:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:48:15.804 Nvme0n1 00:48:15.804 05:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:48:16.063 Nvme0n1 00:48:16.063 05:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:48:16.063 05:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:48:17.968 05:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:48:17.968 05:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:48:18.227 05:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:48:18.485 05:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:48:19.861 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:48:19.861 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:48:19.861 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:19.861 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:48:19.861 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:19.861 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:48:19.861 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:48:19.861 05:39:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:20.120 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:20.120 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:48:20.120 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:20.120 05:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:48:20.380 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:20.380 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:48:20.380 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:48:20.380 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:20.639 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:20.639 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:48:20.639 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:48:20.639 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:20.898 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:20.898 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:48:20.898 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:48:20.898 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:21.157 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:21.157 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:48:21.157 05:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:48:21.416 05:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
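From here on, every assertion in the run is the same query against the bdevperf RPC socket: dump the NVMe I/O paths and pick out one field (current, connected, or accessible) for one listener port. A sketch of the helper the trace keeps invoking, with the jq filter taken verbatim from the log (RPC and /var/tmp/bdevperf.sock as above):

    # port_status <trsvcid> <field> <expected>   e.g. port_status 4420 current true
    port_status() {
        local got
        got=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
              | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

After each set_ANA_state change the test sleeps one second and then checks all six flags (current, connected, and accessible for ports 4420 and 4421) against the expected pattern for that ANA combination.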
00:48:21.674 05:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:48:22.611 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:48:22.611 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:48:22.611 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:22.611 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:48:22.870 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:22.870 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:48:22.871 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:22.871 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:48:23.130 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:23.130 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:48:23.130 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:23.130 05:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:48:23.388 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:23.389 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:48:23.389 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:23.389 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:48:23.647 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:23.647 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:48:23.647 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:23.647 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:48:23.904 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:23.904 05:39:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:48:23.904 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:23.904 05:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:48:24.162 05:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:24.162 05:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:48:24.162 05:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:48:24.419 05:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:48:24.678 05:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:48:25.613 05:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:48:25.613 05:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:48:25.613 05:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:25.613 05:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:48:25.870 05:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:25.870 05:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:48:25.870 05:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:25.870 05:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:48:26.128 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:26.128 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:48:26.128 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:26.128 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:48:26.385 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:26.385 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:48:26.385 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:48:26.386 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:26.644 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:26.644 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:48:26.644 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:48:26.644 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:26.901 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:26.901 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:48:26.901 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:26.901 05:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:48:27.159 05:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:27.159 05:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:48:27.159 05:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:48:27.417 05:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:48:27.676 05:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:48:28.612 05:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:48:28.612 05:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:48:28.871 05:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:28.871 05:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:48:29.130 05:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:29.130 05:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:48:29.130 05:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:29.130 05:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:48:29.388 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:29.388 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:48:29.388 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:29.388 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:48:29.646 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:29.646 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:48:29.646 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:29.646 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:48:29.904 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:29.904 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:48:29.904 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:29.904 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:48:30.162 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:30.162 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:48:30.162 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:30.162 05:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:48:30.420 05:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:30.420 05:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:48:30.420 05:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:48:30.678 05:39:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:48:30.936 05:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:48:31.874 05:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:48:31.874 05:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:48:31.874 05:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:31.874 05:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:48:32.133 05:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:32.133 05:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:48:32.133 05:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:32.133 05:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:48:32.391 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:32.391 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:48:32.391 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:32.391 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:48:32.650 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:32.650 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:48:32.650 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:32.650 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:48:32.909 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:32.909 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:48:32.909 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:32.909 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:48:33.168 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:33.168 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:48:33.168 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:33.168 05:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:48:33.427 05:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:33.427 05:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:48:33.427 05:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:48:33.686 05:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:48:33.945 05:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:48:34.879 05:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:48:34.880 05:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:48:34.880 05:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:34.880 05:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:48:35.138 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:35.138 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:48:35.138 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:35.138 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:48:35.396 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:35.396 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:48:35.396 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:48:35.396 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:48:35.654 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:35.654 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:48:35.654 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:48:35.654 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:35.912 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:35.912 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:48:35.912 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:35.912 05:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:48:36.170 05:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:36.170 05:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:48:36.170 05:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:36.170 05:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:48:36.428 05:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:36.428 05:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:48:36.685 05:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:48:36.685 05:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:48:36.943 05:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:48:37.201 05:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:48:38.135 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:48:38.135 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:48:38.135 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:48:38.135 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:48:38.393 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:38.393 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:48:38.393 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:38.393 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:48:38.652 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:38.652 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:48:38.652 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:38.652 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:48:38.922 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:38.922 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:48:38.922 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:38.922 05:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:48:39.213 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:39.213 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:48:39.213 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:39.213 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:48:39.476 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:39.476 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:48:39.476 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:39.476 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:48:39.734 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:39.734 
05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:48:39.734 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:48:39.992 05:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:48:40.251 05:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:48:41.188 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:48:41.188 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:48:41.188 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:41.188 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:48:41.447 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:41.447 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:48:41.447 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:41.447 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:48:41.706 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:41.706 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:48:41.706 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:48:41.706 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:41.965 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:41.965 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:48:41.965 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:41.965 05:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:48:42.225 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:42.225 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:48:42.225 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:48:42.225 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:42.485 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:42.485 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:48:42.485 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:42.485 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:48:42.744 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:42.744 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:48:42.744 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:48:43.003 05:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:48:43.285 05:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:48:44.221 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:48:44.221 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:48:44.221 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:44.221 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:48:44.480 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:44.480 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:48:44.480 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:44.480 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:48:44.739 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:44.739 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:48:44.739 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:44.739 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:48:44.998 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:44.998 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:48:44.998 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:44.998 05:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:48:45.257 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:45.257 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:48:45.257 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:45.257 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:48:45.516 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:45.516 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:48:45.516 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:48:45.516 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:45.775 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:45.775 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:48:45.775 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:48:46.035 05:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:48:46.294 05:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:48:47.666 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:48:47.666 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:48:47.666 05:39:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:47.667 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:48:47.667 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:47.667 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:48:47.667 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:48:47.667 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:47.925 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:47.925 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:48:47.925 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:47.925 05:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:48:48.228 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:48.228 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:48:48.228 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:48:48.228 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:48.485 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:48.485 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:48:48.485 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:48.485 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:48:48.744 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:48:48.744 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:48:48.744 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:48:48.744 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76180 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76180 ']' 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76180 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76180 00:48:49.003 killing process with pid 76180 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76180' 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76180 00:48:49.003 05:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76180 00:48:49.003 { 00:48:49.003 "results": [ 00:48:49.003 { 00:48:49.003 "job": "Nvme0n1", 00:48:49.003 "core_mask": "0x4", 00:48:49.003 "workload": "verify", 00:48:49.003 "status": "terminated", 00:48:49.003 "verify_range": { 00:48:49.003 "start": 0, 00:48:49.003 "length": 16384 00:48:49.003 }, 00:48:49.003 "queue_depth": 128, 00:48:49.003 "io_size": 4096, 00:48:49.003 "runtime": 32.968426, 00:48:49.003 "iops": 9076.80578987908, 00:48:49.003 "mibps": 35.456272616715154, 00:48:49.003 "io_failed": 0, 00:48:49.003 "io_timeout": 0, 00:48:49.003 "avg_latency_us": 14073.691231256045, 00:48:49.003 "min_latency_us": 294.16727272727275, 00:48:49.003 "max_latency_us": 4026531.84 00:48:49.003 } 00:48:49.003 ], 00:48:49.003 "core_count": 1 00:48:49.003 } 00:48:49.265 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76180 00:48:49.265 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:48:49.265 [2024-12-09 05:39:00.953045] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:48:49.265 [2024-12-09 05:39:00.953172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76180 ] 00:48:49.265 [2024-12-09 05:39:01.106797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:49.265 [2024-12-09 05:39:01.147213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:49.265 [2024-12-09 05:39:01.181003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:49.265 Running I/O for 90 seconds... 
00:48:49.265 7700.00 IOPS, 30.08 MiB/s [2024-12-09T05:39:36.237Z] 7818.50 IOPS, 30.54 MiB/s [2024-12-09T05:39:36.237Z] 7815.00 IOPS, 30.53 MiB/s [2024-12-09T05:39:36.237Z] 7813.25 IOPS, 30.52 MiB/s [2024-12-09T05:39:36.237Z] 7812.00 IOPS, 30.52 MiB/s [2024-12-09T05:39:36.237Z] 7797.17 IOPS, 30.46 MiB/s [2024-12-09T05:39:36.237Z] 7798.57 IOPS, 30.46 MiB/s [2024-12-09T05:39:36.237Z] 7767.88 IOPS, 30.34 MiB/s [2024-12-09T05:39:36.237Z] 7981.44 IOPS, 31.18 MiB/s [2024-12-09T05:39:36.237Z] 8236.90 IOPS, 32.18 MiB/s [2024-12-09T05:39:36.237Z] 8430.64 IOPS, 32.93 MiB/s [2024-12-09T05:39:36.237Z] 8608.92 IOPS, 33.63 MiB/s [2024-12-09T05:39:36.237Z] 8765.54 IOPS, 34.24 MiB/s [2024-12-09T05:39:36.237Z] 8873.71 IOPS, 34.66 MiB/s [2024-12-09T05:39:36.237Z] [2024-12-09 05:39:17.395165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.265 [2024-12-09 05:39:17.395219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:48:49.265 [2024-12-09 05:39:17.395269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.265 [2024-12-09 05:39:17.395289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.395846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.266 [2024-12-09 05:39:17.395879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.266 [2024-12-09 
05:39:17.395913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.266 [2024-12-09 05:39:17.395946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.395996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.266 [2024-12-09 05:39:17.396009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.266 [2024-12-09 05:39:17.396040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.266 [2024-12-09 05:39:17.396084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.266 [2024-12-09 05:39:17.396115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.266 [2024-12-09 05:39:17.396147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109464 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:48:49.266 [2024-12-09 05:39:17.396721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.266 [2024-12-09 05:39:17.396734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.396753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.396766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.396786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.396799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.396818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.396831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.396850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.396863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.396882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.396896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.396915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.396936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.396956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.396970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 
m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.396989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.397002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.397322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.397364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.397399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.397431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.397465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.397498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.397531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.267 [2024-12-09 05:39:17.397564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 
05:39:17.397677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.267 [2024-12-09 05:39:17.397885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:48:49.267 [2024-12-09 05:39:17.397904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.397918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.397937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.397951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.397970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.397984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109728 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.268 [2024-12-09 05:39:17.398804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:48:49.268 [2024-12-09 05:39:17.398823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.398986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.398999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.399019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.399032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.399051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.399064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.399083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.399096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.399115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.399129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.399148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.399168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.399188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.399203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.399223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.399237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.399256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.399269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.399288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.268 [2024-12-09 05:39:17.399301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:48:49.268 [2024-12-09 05:39:17.399320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:17.399333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.399352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:17.399366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.399385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:17.399399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.399417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:17.399430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.399449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 
05:39:17.399463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.399482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:17.399495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.399514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:17.399527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.399547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:17.399576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.400152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:17.400177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.400208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:17.400223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.400249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:17.400264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.400289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:17.400303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.400329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:17.400345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.400372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:17.400386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.400411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109864 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:17.400425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.400451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:17.400465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:17.400504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:17.400521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:48:49.269 8553.07 IOPS, 33.41 MiB/s [2024-12-09T05:39:36.241Z] 8018.50 IOPS, 31.32 MiB/s [2024-12-09T05:39:36.241Z] 7546.82 IOPS, 29.48 MiB/s [2024-12-09T05:39:36.241Z] 7127.56 IOPS, 27.84 MiB/s [2024-12-09T05:39:36.241Z] 7077.00 IOPS, 27.64 MiB/s [2024-12-09T05:39:36.241Z] 7228.35 IOPS, 28.24 MiB/s [2024-12-09T05:39:36.241Z] 7403.95 IOPS, 28.92 MiB/s [2024-12-09T05:39:36.241Z] 7685.05 IOPS, 30.02 MiB/s [2024-12-09T05:39:36.241Z] 7932.70 IOPS, 30.99 MiB/s [2024-12-09T05:39:36.241Z] 8145.54 IOPS, 31.82 MiB/s [2024-12-09T05:39:36.241Z] 8237.64 IOPS, 32.18 MiB/s [2024-12-09T05:39:36.241Z] 8309.73 IOPS, 32.46 MiB/s [2024-12-09T05:39:36.241Z] 8374.70 IOPS, 32.71 MiB/s [2024-12-09T05:39:36.241Z] 8584.14 IOPS, 33.53 MiB/s [2024-12-09T05:39:36.241Z] 8761.17 IOPS, 34.22 MiB/s [2024-12-09T05:39:36.241Z] 8929.00 IOPS, 34.88 MiB/s [2024-12-09T05:39:36.241Z] [2024-12-09 05:39:33.177552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:33.177620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.177710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:33.177732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.177753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:33.177767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.177786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:33.177799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.177818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:33.177831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.177850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:33.177864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.177882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:33.177895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.177914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:33.177927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.177945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:33.177958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.177977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:33.177990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.178008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:33.178022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.178041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:33.178054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.178073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:33.178086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.178105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:33.178128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.178148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:33.178188] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.178234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:33.178249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.178270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:33.178284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.178305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.269 [2024-12-09 05:39:33.178319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.178342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.269 [2024-12-09 05:39:33.178357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:48:49.269 [2024-12-09 05:39:33.178377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.178392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.178427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.178461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.178496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.178555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 
05:39:33.178613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.178667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.178703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.178737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.178769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.178801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.178834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.178889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.178924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.178958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.178993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65448 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.179006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.179039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.179070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.179102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.179144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.179176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.179208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.179240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.179272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.179305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179324] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.179337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.270 [2024-12-09 05:39:33.179370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.179402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.179434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.179466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.179498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.270 [2024-12-09 05:39:33.179542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:48:49.270 [2024-12-09 05:39:33.179561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.271 [2024-12-09 05:39:33.179585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.179622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.271 [2024-12-09 05:39:33.179637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.179656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.271 [2024-12-09 05:39:33.179670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 
05:39:33.179689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.271 [2024-12-09 05:39:33.179702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.179722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.271 [2024-12-09 05:39:33.179735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.179754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.271 [2024-12-09 05:39:33.179768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.179788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.271 [2024-12-09 05:39:33.179802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.180987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.271 [2024-12-09 05:39:33.181015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.181040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.271 [2024-12-09 05:39:33.181055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.181074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.271 [2024-12-09 05:39:33.181088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.181107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.271 [2024-12-09 05:39:33.181121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.181140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.271 [2024-12-09 05:39:33.181164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.181185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:49.271 [2024-12-09 05:39:33.181199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 
cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.181218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.271 [2024-12-09 05:39:33.181231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.181250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.271 [2024-12-09 05:39:33.181263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:48:49.271 [2024-12-09 05:39:33.181282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:49.271 [2024-12-09 05:39:33.181295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:48:49.271 9001.03 IOPS, 35.16 MiB/s [2024-12-09T05:39:36.243Z] 9044.50 IOPS, 35.33 MiB/s [2024-12-09T05:39:36.243Z] Received shutdown signal, test time was about 32.969148 seconds 00:48:49.271 00:48:49.271 Latency(us) 00:48:49.271 [2024-12-09T05:39:36.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:49.271 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:48:49.271 Verification LBA range: start 0x0 length 0x4000 00:48:49.271 Nvme0n1 : 32.97 9076.81 35.46 0.00 0.00 14073.69 294.17 4026531.84 00:48:49.271 [2024-12-09T05:39:36.243Z] =================================================================================================================== 00:48:49.271 [2024-12-09T05:39:36.243Z] Total : 9076.81 35.46 0.00 0.00 14073.69 294.17 4026531.84 00:48:49.271 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:49.530 rmmod nvme_tcp 00:48:49.530 rmmod nvme_fabrics 00:48:49.530 rmmod nvme_keyring 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:48:49.530 05:39:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76137 ']' 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76137 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76137 ']' 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76137 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76137 00:48:49.530 killing process with pid 76137 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76137' 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76137 00:48:49.530 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76137 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:48:49.789 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:48:49.789 
05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:48:50.047 00:48:50.047 real 0m38.592s 00:48:50.047 user 2m4.936s 00:48:50.047 sys 0m11.005s 00:48:50.047 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:50.047 ************************************ 00:48:50.047 END TEST nvmf_host_multipath_status 00:48:50.048 05:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:48:50.048 ************************************ 00:48:50.048 05:39:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:48:50.048 05:39:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:48:50.048 05:39:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:50.048 05:39:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:48:50.048 ************************************ 00:48:50.048 START TEST nvmf_discovery_remove_ifc 00:48:50.048 ************************************ 00:48:50.048 05:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:48:50.307 * Looking for test storage... 
00:48:50.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:48:50.307 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:48:50.307 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:48:50.307 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:48:50.307 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:48:50.307 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:50.307 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:50.307 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:50.307 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:48:50.307 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:48:50.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:50.308 --rc genhtml_branch_coverage=1 00:48:50.308 --rc genhtml_function_coverage=1 00:48:50.308 --rc genhtml_legend=1 00:48:50.308 --rc geninfo_all_blocks=1 00:48:50.308 --rc geninfo_unexecuted_blocks=1 00:48:50.308 00:48:50.308 ' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:48:50.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:50.308 --rc genhtml_branch_coverage=1 00:48:50.308 --rc genhtml_function_coverage=1 00:48:50.308 --rc genhtml_legend=1 00:48:50.308 --rc geninfo_all_blocks=1 00:48:50.308 --rc geninfo_unexecuted_blocks=1 00:48:50.308 00:48:50.308 ' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:48:50.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:50.308 --rc genhtml_branch_coverage=1 00:48:50.308 --rc genhtml_function_coverage=1 00:48:50.308 --rc genhtml_legend=1 00:48:50.308 --rc geninfo_all_blocks=1 00:48:50.308 --rc geninfo_unexecuted_blocks=1 00:48:50.308 00:48:50.308 ' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:48:50.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:50.308 --rc genhtml_branch_coverage=1 00:48:50.308 --rc genhtml_function_coverage=1 00:48:50.308 --rc genhtml_legend=1 00:48:50.308 --rc geninfo_all_blocks=1 00:48:50.308 --rc geninfo_unexecuted_blocks=1 00:48:50.308 00:48:50.308 ' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:50.308 05:39:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:50.308 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:48:50.308 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:50.309 05:39:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:48:50.309 Cannot find device "nvmf_init_br" 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:48:50.309 Cannot find device "nvmf_init_br2" 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:48:50.309 Cannot find device "nvmf_tgt_br" 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:48:50.309 Cannot find device "nvmf_tgt_br2" 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:48:50.309 Cannot find device "nvmf_init_br" 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:48:50.309 Cannot find device "nvmf_init_br2" 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:48:50.309 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:48:50.309 Cannot find device "nvmf_tgt_br" 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:48:50.568 Cannot find device "nvmf_tgt_br2" 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:48:50.568 Cannot find device "nvmf_br" 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:48:50.568 Cannot find device "nvmf_init_if" 00:48:50.568 05:39:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:48:50.568 Cannot find device "nvmf_init_if2" 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:50.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:50.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:50.568 05:39:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:50.568 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:48:50.827 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:48:50.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:50.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:48:50.827 00:48:50.827 --- 10.0.0.3 ping statistics --- 00:48:50.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:50.827 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:48:50.827 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:48:50.827 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:48:50.827 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:48:50.827 00:48:50.827 --- 10.0.0.4 ping statistics --- 00:48:50.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:50.827 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:48:50.827 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:50.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:50.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:48:50.827 00:48:50.827 --- 10.0.0.1 ping statistics --- 00:48:50.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:50.827 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:48:50.827 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:48:50.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:50.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:48:50.827 00:48:50.827 --- 10.0.0.2 ping statistics --- 00:48:50.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:50.827 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77019 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77019 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77019 ']' 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:50.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:50.828 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:50.828 [2024-12-09 05:39:37.648663] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:48:50.828 [2024-12-09 05:39:37.648747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:50.828 [2024-12-09 05:39:37.794495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:51.087 [2024-12-09 05:39:37.822432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:51.087 [2024-12-09 05:39:37.822511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:51.087 [2024-12-09 05:39:37.822536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:51.087 [2024-12-09 05:39:37.822543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:51.087 [2024-12-09 05:39:37.822549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:51.087 [2024-12-09 05:39:37.822888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:51.087 [2024-12-09 05:39:37.850361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:51.087 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:51.087 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:48:51.087 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:51.087 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:51.087 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:51.087 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:51.087 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:48:51.087 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:51.087 05:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:51.087 [2024-12-09 05:39:37.956875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:51.087 [2024-12-09 05:39:37.965027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:48:51.087 null0 00:48:51.087 [2024-12-09 05:39:37.996931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:51.087 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:51.087 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77038 00:48:51.087 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:48:51.087 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77038 /tmp/host.sock 00:48:51.087 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77038 ']' 00:48:51.087 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:48:51.087 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:51.087 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:48:51.087 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:48:51.087 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:51.087 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:51.346 [2024-12-09 05:39:38.078350] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:48:51.346 [2024-12-09 05:39:38.078441] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77038 ] 00:48:51.346 [2024-12-09 05:39:38.234834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:51.346 [2024-12-09 05:39:38.273159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:51.604 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:51.604 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:48:51.604 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:48:51.604 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:48:51.604 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:51.605 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:51.605 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:51.605 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:48:51.605 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:51.605 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:51.605 [2024-12-09 05:39:38.374551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:51.605 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:51.605 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:48:51.605 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:51.605 05:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:52.541 [2024-12-09 05:39:39.419664] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:48:52.541 [2024-12-09 05:39:39.419687] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:48:52.541 [2024-12-09 05:39:39.419724] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:52.541 [2024-12-09 05:39:39.425702] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:48:52.541 [2024-12-09 05:39:39.480051] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:48:52.541 [2024-12-09 05:39:39.481024] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15be000:1 started. 00:48:52.541 [2024-12-09 05:39:39.482814] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:48:52.541 [2024-12-09 05:39:39.482869] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:48:52.541 [2024-12-09 05:39:39.482910] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:48:52.541 [2024-12-09 05:39:39.482926] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:48:52.541 [2024-12-09 05:39:39.482946] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:48:52.541 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:52.541 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:48:52.541 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:48:52.541 [2024-12-09 05:39:39.488492] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15be000 was disconnected and freed. delete nvme_qpair. 
00:48:52.541 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:52.541 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:48:52.541 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:52.541 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:52.541 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:48:52.541 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:48:52.541 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:48:52.800 05:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:48:53.736 05:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:48:53.736 05:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:53.736 05:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:48:53.736 05:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:53.736 05:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:53.736 05:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:48:53.736 05:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:48:53.736 05:39:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:53.736 05:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:48:53.736 05:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:48:55.114 05:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:48:55.114 05:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:55.114 05:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:55.114 05:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:55.114 05:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:48:55.114 05:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:48:55.114 05:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:48:55.114 05:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:55.114 05:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:48:55.114 05:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:48:56.052 05:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:48:56.052 05:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:56.052 05:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:48:56.052 05:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:48:56.052 05:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:48:56.052 05:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:56.052 05:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:56.052 05:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:56.052 05:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:48:56.052 05:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:48:56.988 05:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:48:56.988 05:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:56.988 05:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:56.988 05:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:48:56.988 05:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:56.988 05:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:48:56.988 05:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:48:56.988 05:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:56.988 05:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:48:56.988 05:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:48:57.923 05:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:48:57.923 05:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:57.923 05:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:48:57.923 05:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:57.923 05:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:48:57.923 05:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:48:57.923 05:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:57.923 05:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:58.181 05:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:48:58.181 05:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:48:58.181 [2024-12-09 05:39:44.921466] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:48:58.181 [2024-12-09 05:39:44.921551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:58.181 [2024-12-09 05:39:44.921580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:58.181 [2024-12-09 05:39:44.921617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:58.181 [2024-12-09 05:39:44.921628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:58.181 [2024-12-09 05:39:44.921637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:58.181 [2024-12-09 05:39:44.921646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:58.181 [2024-12-09 05:39:44.921655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:58.181 [2024-12-09 05:39:44.921664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:58.181 [2024-12-09 05:39:44.921673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:48:58.181 [2024-12-09 05:39:44.921682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:48:58.181 [2024-12-09 05:39:44.921690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159a250 is same with the state(6) to be set 00:48:58.181 [2024-12-09 05:39:44.931463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159a250 (9): Bad file descriptor 00:48:58.181 [2024-12-09 05:39:44.941484] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:48:58.181 [2024-12-09 05:39:44.941519] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:48:58.181 [2024-12-09 05:39:44.941525] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:48:58.181 [2024-12-09 05:39:44.941530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:48:58.181 [2024-12-09 05:39:44.941602] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:48:59.167 05:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:48:59.167 05:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:59.167 05:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:48:59.167 05:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:59.167 05:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:48:59.167 05:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:48:59.167 05:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:48:59.167 [2024-12-09 05:39:45.989713] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:48:59.167 [2024-12-09 05:39:45.989823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159a250 with addr=10.0.0.3, port=4420 00:48:59.167 [2024-12-09 05:39:45.989855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159a250 is same with the state(6) to be set 00:48:59.167 [2024-12-09 05:39:45.989925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159a250 (9): Bad file descriptor 00:48:59.167 [2024-12-09 05:39:45.990975] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:48:59.167 [2024-12-09 05:39:45.991079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:48:59.167 [2024-12-09 05:39:45.991105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:48:59.167 [2024-12-09 05:39:45.991128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:48:59.167 [2024-12-09 05:39:45.991151] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:48:59.167 [2024-12-09 05:39:45.991172] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:48:59.167 [2024-12-09 05:39:45.991188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:48:59.167 [2024-12-09 05:39:45.991223] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:48:59.167 [2024-12-09 05:39:45.991243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:48:59.167 05:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:59.167 05:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:48:59.167 05:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:49:00.101 [2024-12-09 05:39:46.991342] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:49:00.101 [2024-12-09 05:39:46.991385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:49:00.101 [2024-12-09 05:39:46.991404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:49:00.101 [2024-12-09 05:39:46.991429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:49:00.101 [2024-12-09 05:39:46.991438] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:49:00.102 [2024-12-09 05:39:46.991446] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:49:00.102 [2024-12-09 05:39:46.991451] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:49:00.102 [2024-12-09 05:39:46.991455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:49:00.102 [2024-12-09 05:39:46.991482] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:49:00.102 [2024-12-09 05:39:46.991513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:49:00.102 [2024-12-09 05:39:46.991526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:00.102 [2024-12-09 05:39:46.991538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:49:00.102 [2024-12-09 05:39:46.991546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:00.102 [2024-12-09 05:39:46.991555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:49:00.102 [2024-12-09 05:39:46.991562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:00.102 [2024-12-09 05:39:46.991570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:49:00.102 [2024-12-09 05:39:46.991589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:00.102 [2024-12-09 05:39:46.991599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:49:00.102 [2024-12-09 05:39:46.991606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:00.102 [2024-12-09 05:39:46.991614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:49:00.102 [2024-12-09 05:39:46.992081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1525a20 (9): Bad file descriptor 00:49:00.102 [2024-12-09 05:39:46.993092] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:49:00.102 [2024-12-09 05:39:46.993132] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:49:00.102 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:49:00.102 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:00.102 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:00.102 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:49:00.102 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:49:00.102 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:49:00.102 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:49:00.102 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:49:00.360 05:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:49:01.294 05:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:49:01.294 05:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:01.294 05:39:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:49:01.294 05:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:01.294 05:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:49:01.294 05:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:49:01.294 05:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:49:01.294 05:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:01.294 05:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:49:01.294 05:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:49:02.232 [2024-12-09 05:39:49.003823] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:49:02.232 [2024-12-09 05:39:49.003846] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:49:02.232 [2024-12-09 05:39:49.003878] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:49:02.232 [2024-12-09 05:39:49.009857] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:49:02.232 [2024-12-09 05:39:49.064153] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:49:02.232 [2024-12-09 05:39:49.064871] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x15a5d80:1 started. 00:49:02.232 [2024-12-09 05:39:49.066019] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:49:02.232 [2024-12-09 05:39:49.066075] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:49:02.232 [2024-12-09 05:39:49.066113] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:49:02.232 [2024-12-09 05:39:49.066136] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:49:02.232 [2024-12-09 05:39:49.066145] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:49:02.232 [2024-12-09 05:39:49.072384] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x15a5d80 was disconnected and freed. delete nvme_qpair. 
00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77038 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77038 ']' 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77038 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77038 00:49:02.491 killing process with pid 77038 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77038' 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77038 00:49:02.491 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77038 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:02.750 rmmod nvme_tcp 00:49:02.750 rmmod nvme_fabrics 00:49:02.750 rmmod nvme_keyring 00:49:02.750 05:39:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77019 ']' 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77019 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77019 ']' 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77019 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77019 00:49:02.750 killing process with pid 77019 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77019' 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77019 00:49:02.750 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77019 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:03.010 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:03.269 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:03.269 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:03.269 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:03.269 05:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:49:03.269 00:49:03.269 real 0m13.043s 00:49:03.269 user 0m22.253s 00:49:03.269 sys 0m2.403s 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:49:03.269 ************************************ 00:49:03.269 END TEST nvmf_discovery_remove_ifc 00:49:03.269 ************************************ 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:03.269 ************************************ 00:49:03.269 START TEST nvmf_identify_kernel_target 00:49:03.269 ************************************ 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:49:03.269 * Looking for test storage... 
00:49:03.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:49:03.269 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:03.530 --rc genhtml_branch_coverage=1 00:49:03.530 --rc genhtml_function_coverage=1 00:49:03.530 --rc genhtml_legend=1 00:49:03.530 --rc geninfo_all_blocks=1 00:49:03.530 --rc geninfo_unexecuted_blocks=1 00:49:03.530 00:49:03.530 ' 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:03.530 --rc genhtml_branch_coverage=1 00:49:03.530 --rc genhtml_function_coverage=1 00:49:03.530 --rc genhtml_legend=1 00:49:03.530 --rc geninfo_all_blocks=1 00:49:03.530 --rc geninfo_unexecuted_blocks=1 00:49:03.530 00:49:03.530 ' 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:03.530 --rc genhtml_branch_coverage=1 00:49:03.530 --rc genhtml_function_coverage=1 00:49:03.530 --rc genhtml_legend=1 00:49:03.530 --rc geninfo_all_blocks=1 00:49:03.530 --rc geninfo_unexecuted_blocks=1 00:49:03.530 00:49:03.530 ' 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:03.530 --rc genhtml_branch_coverage=1 00:49:03.530 --rc genhtml_function_coverage=1 00:49:03.530 --rc genhtml_legend=1 00:49:03.530 --rc geninfo_all_blocks=1 00:49:03.530 --rc geninfo_unexecuted_blocks=1 00:49:03.530 00:49:03.530 ' 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:03.530 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:03.531 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:49:03.531 05:39:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:03.531 05:39:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:03.531 Cannot find device "nvmf_init_br" 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:03.531 Cannot find device "nvmf_init_br2" 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:49:03.531 Cannot find device "nvmf_tgt_br" 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:49:03.531 Cannot find device "nvmf_tgt_br2" 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:03.531 Cannot find device "nvmf_init_br" 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:03.531 Cannot find device "nvmf_init_br2" 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:03.531 Cannot find device "nvmf_tgt_br" 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:03.531 Cannot find device "nvmf_tgt_br2" 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:03.531 Cannot find device "nvmf_br" 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:03.531 Cannot find device "nvmf_init_if" 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:49:03.531 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:03.531 Cannot find device "nvmf_init_if2" 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:03.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:03.532 05:39:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:03.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:03.532 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:03.791 05:39:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:03.791 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:03.791 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:49:03.791 00:49:03.791 --- 10.0.0.3 ping statistics --- 00:49:03.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:03.791 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:03.791 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:49:03.791 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:49:03.791 00:49:03.791 --- 10.0.0.4 ping statistics --- 00:49:03.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:03.791 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:03.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:03.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:49:03.791 00:49:03.791 --- 10.0.0.1 ping statistics --- 00:49:03.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:03.791 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:03.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
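The records above show nvmf_veth_init assembling the test network: a target network namespace (nvmf_tgt_ns_spdk), veth pairs for the initiator and target sides, a bridge (nvmf_br) joining the host-side peers, iptables ACCEPT rules tagged with an SPDK_NVMF comment so cleanup can strip them later, and ping checks across 10.0.0.0/24. A condensed sketch of that topology, reconstructed from the commands visible in the trace rather than copied from nvmf/common.sh (the real helper creates a second pair of interfaces and extra checks), would look like:

```bash
#!/usr/bin/env bash
# Sketch of the veth/bridge test topology (run as root on a Linux host with iproute2).
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# One initiator-side and one target-side veth pair (the log creates two of each).
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator at 10.0.0.1 on the host, target at 10.0.0.3 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so initiator and target share an L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP (port 4420) in; the comment lets teardown remove the rules with
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
  -m comment --comment SPDK_NVMF

# Reachability checks, mirroring the pings in the log.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```

The nvmf_veth_fini records near the end of this log remove the same objects in reverse order and filter out the SPDK_NVMF-tagged iptables rules.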
00:49:03.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:49:03.791 00:49:03.791 --- 10.0.0.2 ping statistics --- 00:49:03.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:03.791 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:03.791 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:49:03.792 05:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:49:04.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:04.392 Waiting for block devices as requested 00:49:04.392 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:49:04.392 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:49:04.392 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:49:04.392 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:49:04.392 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:49:04.392 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:49:04.392 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:49:04.392 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:49:04.392 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:49:04.392 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:49:04.392 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:49:04.392 No valid GPT data, bailing 00:49:04.392 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:49:04.652 05:39:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:49:04.652 No valid GPT data, bailing 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:49:04.652 No valid GPT data, bailing 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:49:04.652 No valid GPT data, bailing 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:04.652 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:49:04.653 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:49:04.653 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:49:04.653 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:49:04.653 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:49:04.653 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:49:04.653 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:49:04.653 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:49:04.653 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:49:04.653 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:49:04.912 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -a 10.0.0.1 -t tcp -s 4420 00:49:04.912 00:49:04.912 Discovery Log Number of Records 2, Generation counter 2 00:49:04.912 =====Discovery Log Entry 0====== 00:49:04.912 trtype: tcp 00:49:04.912 adrfam: ipv4 00:49:04.912 subtype: current discovery subsystem 00:49:04.912 treq: not specified, sq flow control disable supported 00:49:04.912 portid: 1 00:49:04.912 trsvcid: 4420 00:49:04.912 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:49:04.912 traddr: 10.0.0.1 00:49:04.912 eflags: none 00:49:04.912 sectype: none 00:49:04.912 =====Discovery Log Entry 1====== 00:49:04.912 trtype: tcp 00:49:04.912 adrfam: ipv4 00:49:04.912 subtype: nvme subsystem 00:49:04.912 treq: not 
specified, sq flow control disable supported 00:49:04.912 portid: 1 00:49:04.912 trsvcid: 4420 00:49:04.912 subnqn: nqn.2016-06.io.spdk:testnqn 00:49:04.912 traddr: 10.0.0.1 00:49:04.912 eflags: none 00:49:04.912 sectype: none 00:49:04.912 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:49:04.912 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:49:04.912 ===================================================== 00:49:04.912 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:49:04.912 ===================================================== 00:49:04.912 Controller Capabilities/Features 00:49:04.912 ================================ 00:49:04.912 Vendor ID: 0000 00:49:04.912 Subsystem Vendor ID: 0000 00:49:04.912 Serial Number: c2ad33a09e14eec483b7 00:49:04.912 Model Number: Linux 00:49:04.912 Firmware Version: 6.8.9-20 00:49:04.912 Recommended Arb Burst: 0 00:49:04.912 IEEE OUI Identifier: 00 00 00 00:49:04.912 Multi-path I/O 00:49:04.912 May have multiple subsystem ports: No 00:49:04.912 May have multiple controllers: No 00:49:04.912 Associated with SR-IOV VF: No 00:49:04.912 Max Data Transfer Size: Unlimited 00:49:04.912 Max Number of Namespaces: 0 00:49:04.912 Max Number of I/O Queues: 1024 00:49:04.912 NVMe Specification Version (VS): 1.3 00:49:04.912 NVMe Specification Version (Identify): 1.3 00:49:04.912 Maximum Queue Entries: 1024 00:49:04.912 Contiguous Queues Required: No 00:49:04.912 Arbitration Mechanisms Supported 00:49:04.912 Weighted Round Robin: Not Supported 00:49:04.912 Vendor Specific: Not Supported 00:49:04.912 Reset Timeout: 7500 ms 00:49:04.912 Doorbell Stride: 4 bytes 00:49:04.912 NVM Subsystem Reset: Not Supported 00:49:04.912 Command Sets Supported 00:49:04.912 NVM Command Set: Supported 00:49:04.912 Boot Partition: Not Supported 00:49:04.912 Memory Page Size Minimum: 4096 bytes 00:49:04.913 Memory Page Size Maximum: 4096 bytes 00:49:04.913 Persistent Memory Region: Not Supported 00:49:04.913 Optional Asynchronous Events Supported 00:49:04.913 Namespace Attribute Notices: Not Supported 00:49:04.913 Firmware Activation Notices: Not Supported 00:49:04.913 ANA Change Notices: Not Supported 00:49:04.913 PLE Aggregate Log Change Notices: Not Supported 00:49:04.913 LBA Status Info Alert Notices: Not Supported 00:49:04.913 EGE Aggregate Log Change Notices: Not Supported 00:49:04.913 Normal NVM Subsystem Shutdown event: Not Supported 00:49:04.913 Zone Descriptor Change Notices: Not Supported 00:49:04.913 Discovery Log Change Notices: Supported 00:49:04.913 Controller Attributes 00:49:04.913 128-bit Host Identifier: Not Supported 00:49:04.913 Non-Operational Permissive Mode: Not Supported 00:49:04.913 NVM Sets: Not Supported 00:49:04.913 Read Recovery Levels: Not Supported 00:49:04.913 Endurance Groups: Not Supported 00:49:04.913 Predictable Latency Mode: Not Supported 00:49:04.913 Traffic Based Keep ALive: Not Supported 00:49:04.913 Namespace Granularity: Not Supported 00:49:04.913 SQ Associations: Not Supported 00:49:04.913 UUID List: Not Supported 00:49:04.913 Multi-Domain Subsystem: Not Supported 00:49:04.913 Fixed Capacity Management: Not Supported 00:49:04.913 Variable Capacity Management: Not Supported 00:49:04.913 Delete Endurance Group: Not Supported 00:49:04.913 Delete NVM Set: Not Supported 00:49:04.913 Extended LBA Formats Supported: Not Supported 00:49:04.913 Flexible Data 
Placement Supported: Not Supported 00:49:04.913 00:49:04.913 Controller Memory Buffer Support 00:49:04.913 ================================ 00:49:04.913 Supported: No 00:49:04.913 00:49:04.913 Persistent Memory Region Support 00:49:04.913 ================================ 00:49:04.913 Supported: No 00:49:04.913 00:49:04.913 Admin Command Set Attributes 00:49:04.913 ============================ 00:49:04.913 Security Send/Receive: Not Supported 00:49:04.913 Format NVM: Not Supported 00:49:04.913 Firmware Activate/Download: Not Supported 00:49:04.913 Namespace Management: Not Supported 00:49:04.913 Device Self-Test: Not Supported 00:49:04.913 Directives: Not Supported 00:49:04.913 NVMe-MI: Not Supported 00:49:04.913 Virtualization Management: Not Supported 00:49:04.913 Doorbell Buffer Config: Not Supported 00:49:04.913 Get LBA Status Capability: Not Supported 00:49:04.913 Command & Feature Lockdown Capability: Not Supported 00:49:04.913 Abort Command Limit: 1 00:49:04.913 Async Event Request Limit: 1 00:49:04.913 Number of Firmware Slots: N/A 00:49:04.913 Firmware Slot 1 Read-Only: N/A 00:49:05.173 Firmware Activation Without Reset: N/A 00:49:05.173 Multiple Update Detection Support: N/A 00:49:05.173 Firmware Update Granularity: No Information Provided 00:49:05.173 Per-Namespace SMART Log: No 00:49:05.173 Asymmetric Namespace Access Log Page: Not Supported 00:49:05.173 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:49:05.173 Command Effects Log Page: Not Supported 00:49:05.173 Get Log Page Extended Data: Supported 00:49:05.173 Telemetry Log Pages: Not Supported 00:49:05.173 Persistent Event Log Pages: Not Supported 00:49:05.173 Supported Log Pages Log Page: May Support 00:49:05.173 Commands Supported & Effects Log Page: Not Supported 00:49:05.173 Feature Identifiers & Effects Log Page:May Support 00:49:05.173 NVMe-MI Commands & Effects Log Page: May Support 00:49:05.173 Data Area 4 for Telemetry Log: Not Supported 00:49:05.173 Error Log Page Entries Supported: 1 00:49:05.173 Keep Alive: Not Supported 00:49:05.173 00:49:05.173 NVM Command Set Attributes 00:49:05.173 ========================== 00:49:05.173 Submission Queue Entry Size 00:49:05.173 Max: 1 00:49:05.173 Min: 1 00:49:05.173 Completion Queue Entry Size 00:49:05.173 Max: 1 00:49:05.173 Min: 1 00:49:05.173 Number of Namespaces: 0 00:49:05.173 Compare Command: Not Supported 00:49:05.173 Write Uncorrectable Command: Not Supported 00:49:05.173 Dataset Management Command: Not Supported 00:49:05.173 Write Zeroes Command: Not Supported 00:49:05.173 Set Features Save Field: Not Supported 00:49:05.173 Reservations: Not Supported 00:49:05.173 Timestamp: Not Supported 00:49:05.173 Copy: Not Supported 00:49:05.173 Volatile Write Cache: Not Present 00:49:05.173 Atomic Write Unit (Normal): 1 00:49:05.173 Atomic Write Unit (PFail): 1 00:49:05.173 Atomic Compare & Write Unit: 1 00:49:05.173 Fused Compare & Write: Not Supported 00:49:05.173 Scatter-Gather List 00:49:05.173 SGL Command Set: Supported 00:49:05.173 SGL Keyed: Not Supported 00:49:05.173 SGL Bit Bucket Descriptor: Not Supported 00:49:05.173 SGL Metadata Pointer: Not Supported 00:49:05.173 Oversized SGL: Not Supported 00:49:05.173 SGL Metadata Address: Not Supported 00:49:05.173 SGL Offset: Supported 00:49:05.173 Transport SGL Data Block: Not Supported 00:49:05.173 Replay Protected Memory Block: Not Supported 00:49:05.173 00:49:05.173 Firmware Slot Information 00:49:05.173 ========================= 00:49:05.173 Active slot: 0 00:49:05.173 00:49:05.173 00:49:05.173 Error Log 
00:49:05.173 ========= 00:49:05.173 00:49:05.173 Active Namespaces 00:49:05.173 ================= 00:49:05.173 Discovery Log Page 00:49:05.173 ================== 00:49:05.173 Generation Counter: 2 00:49:05.173 Number of Records: 2 00:49:05.173 Record Format: 0 00:49:05.173 00:49:05.173 Discovery Log Entry 0 00:49:05.173 ---------------------- 00:49:05.173 Transport Type: 3 (TCP) 00:49:05.173 Address Family: 1 (IPv4) 00:49:05.173 Subsystem Type: 3 (Current Discovery Subsystem) 00:49:05.173 Entry Flags: 00:49:05.173 Duplicate Returned Information: 0 00:49:05.173 Explicit Persistent Connection Support for Discovery: 0 00:49:05.173 Transport Requirements: 00:49:05.173 Secure Channel: Not Specified 00:49:05.173 Port ID: 1 (0x0001) 00:49:05.173 Controller ID: 65535 (0xffff) 00:49:05.173 Admin Max SQ Size: 32 00:49:05.173 Transport Service Identifier: 4420 00:49:05.173 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:49:05.173 Transport Address: 10.0.0.1 00:49:05.173 Discovery Log Entry 1 00:49:05.173 ---------------------- 00:49:05.173 Transport Type: 3 (TCP) 00:49:05.173 Address Family: 1 (IPv4) 00:49:05.173 Subsystem Type: 2 (NVM Subsystem) 00:49:05.173 Entry Flags: 00:49:05.173 Duplicate Returned Information: 0 00:49:05.173 Explicit Persistent Connection Support for Discovery: 0 00:49:05.173 Transport Requirements: 00:49:05.173 Secure Channel: Not Specified 00:49:05.173 Port ID: 1 (0x0001) 00:49:05.173 Controller ID: 65535 (0xffff) 00:49:05.173 Admin Max SQ Size: 32 00:49:05.173 Transport Service Identifier: 4420 00:49:05.173 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:49:05.173 Transport Address: 10.0.0.1 00:49:05.173 05:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:05.173 get_feature(0x01) failed 00:49:05.173 get_feature(0x02) failed 00:49:05.173 get_feature(0x04) failed 00:49:05.173 ===================================================== 00:49:05.173 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:05.173 ===================================================== 00:49:05.173 Controller Capabilities/Features 00:49:05.173 ================================ 00:49:05.173 Vendor ID: 0000 00:49:05.173 Subsystem Vendor ID: 0000 00:49:05.173 Serial Number: 62cd052bfa9e983f85f4 00:49:05.173 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:49:05.173 Firmware Version: 6.8.9-20 00:49:05.173 Recommended Arb Burst: 6 00:49:05.173 IEEE OUI Identifier: 00 00 00 00:49:05.173 Multi-path I/O 00:49:05.173 May have multiple subsystem ports: Yes 00:49:05.173 May have multiple controllers: Yes 00:49:05.173 Associated with SR-IOV VF: No 00:49:05.173 Max Data Transfer Size: Unlimited 00:49:05.173 Max Number of Namespaces: 1024 00:49:05.173 Max Number of I/O Queues: 128 00:49:05.173 NVMe Specification Version (VS): 1.3 00:49:05.173 NVMe Specification Version (Identify): 1.3 00:49:05.173 Maximum Queue Entries: 1024 00:49:05.173 Contiguous Queues Required: No 00:49:05.173 Arbitration Mechanisms Supported 00:49:05.173 Weighted Round Robin: Not Supported 00:49:05.173 Vendor Specific: Not Supported 00:49:05.173 Reset Timeout: 7500 ms 00:49:05.173 Doorbell Stride: 4 bytes 00:49:05.173 NVM Subsystem Reset: Not Supported 00:49:05.173 Command Sets Supported 00:49:05.173 NVM Command Set: Supported 00:49:05.173 Boot Partition: Not Supported 00:49:05.173 Memory 
Page Size Minimum: 4096 bytes 00:49:05.173 Memory Page Size Maximum: 4096 bytes 00:49:05.173 Persistent Memory Region: Not Supported 00:49:05.173 Optional Asynchronous Events Supported 00:49:05.173 Namespace Attribute Notices: Supported 00:49:05.173 Firmware Activation Notices: Not Supported 00:49:05.173 ANA Change Notices: Supported 00:49:05.173 PLE Aggregate Log Change Notices: Not Supported 00:49:05.173 LBA Status Info Alert Notices: Not Supported 00:49:05.173 EGE Aggregate Log Change Notices: Not Supported 00:49:05.173 Normal NVM Subsystem Shutdown event: Not Supported 00:49:05.173 Zone Descriptor Change Notices: Not Supported 00:49:05.173 Discovery Log Change Notices: Not Supported 00:49:05.173 Controller Attributes 00:49:05.173 128-bit Host Identifier: Supported 00:49:05.173 Non-Operational Permissive Mode: Not Supported 00:49:05.173 NVM Sets: Not Supported 00:49:05.173 Read Recovery Levels: Not Supported 00:49:05.173 Endurance Groups: Not Supported 00:49:05.173 Predictable Latency Mode: Not Supported 00:49:05.173 Traffic Based Keep ALive: Supported 00:49:05.173 Namespace Granularity: Not Supported 00:49:05.173 SQ Associations: Not Supported 00:49:05.173 UUID List: Not Supported 00:49:05.173 Multi-Domain Subsystem: Not Supported 00:49:05.173 Fixed Capacity Management: Not Supported 00:49:05.173 Variable Capacity Management: Not Supported 00:49:05.173 Delete Endurance Group: Not Supported 00:49:05.173 Delete NVM Set: Not Supported 00:49:05.173 Extended LBA Formats Supported: Not Supported 00:49:05.173 Flexible Data Placement Supported: Not Supported 00:49:05.173 00:49:05.173 Controller Memory Buffer Support 00:49:05.174 ================================ 00:49:05.174 Supported: No 00:49:05.174 00:49:05.174 Persistent Memory Region Support 00:49:05.174 ================================ 00:49:05.174 Supported: No 00:49:05.174 00:49:05.174 Admin Command Set Attributes 00:49:05.174 ============================ 00:49:05.174 Security Send/Receive: Not Supported 00:49:05.174 Format NVM: Not Supported 00:49:05.174 Firmware Activate/Download: Not Supported 00:49:05.174 Namespace Management: Not Supported 00:49:05.174 Device Self-Test: Not Supported 00:49:05.174 Directives: Not Supported 00:49:05.174 NVMe-MI: Not Supported 00:49:05.174 Virtualization Management: Not Supported 00:49:05.174 Doorbell Buffer Config: Not Supported 00:49:05.174 Get LBA Status Capability: Not Supported 00:49:05.174 Command & Feature Lockdown Capability: Not Supported 00:49:05.174 Abort Command Limit: 4 00:49:05.174 Async Event Request Limit: 4 00:49:05.174 Number of Firmware Slots: N/A 00:49:05.174 Firmware Slot 1 Read-Only: N/A 00:49:05.174 Firmware Activation Without Reset: N/A 00:49:05.174 Multiple Update Detection Support: N/A 00:49:05.174 Firmware Update Granularity: No Information Provided 00:49:05.174 Per-Namespace SMART Log: Yes 00:49:05.174 Asymmetric Namespace Access Log Page: Supported 00:49:05.174 ANA Transition Time : 10 sec 00:49:05.174 00:49:05.174 Asymmetric Namespace Access Capabilities 00:49:05.174 ANA Optimized State : Supported 00:49:05.174 ANA Non-Optimized State : Supported 00:49:05.174 ANA Inaccessible State : Supported 00:49:05.174 ANA Persistent Loss State : Supported 00:49:05.174 ANA Change State : Supported 00:49:05.174 ANAGRPID is not changed : No 00:49:05.174 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:49:05.174 00:49:05.174 ANA Group Identifier Maximum : 128 00:49:05.174 Number of ANA Group Identifiers : 128 00:49:05.174 Max Number of Allowed Namespaces : 1024 00:49:05.174 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:49:05.174 Command Effects Log Page: Supported 00:49:05.174 Get Log Page Extended Data: Supported 00:49:05.174 Telemetry Log Pages: Not Supported 00:49:05.174 Persistent Event Log Pages: Not Supported 00:49:05.174 Supported Log Pages Log Page: May Support 00:49:05.174 Commands Supported & Effects Log Page: Not Supported 00:49:05.174 Feature Identifiers & Effects Log Page:May Support 00:49:05.174 NVMe-MI Commands & Effects Log Page: May Support 00:49:05.174 Data Area 4 for Telemetry Log: Not Supported 00:49:05.174 Error Log Page Entries Supported: 128 00:49:05.174 Keep Alive: Supported 00:49:05.174 Keep Alive Granularity: 1000 ms 00:49:05.174 00:49:05.174 NVM Command Set Attributes 00:49:05.174 ========================== 00:49:05.174 Submission Queue Entry Size 00:49:05.174 Max: 64 00:49:05.174 Min: 64 00:49:05.174 Completion Queue Entry Size 00:49:05.174 Max: 16 00:49:05.174 Min: 16 00:49:05.174 Number of Namespaces: 1024 00:49:05.174 Compare Command: Not Supported 00:49:05.174 Write Uncorrectable Command: Not Supported 00:49:05.174 Dataset Management Command: Supported 00:49:05.174 Write Zeroes Command: Supported 00:49:05.174 Set Features Save Field: Not Supported 00:49:05.174 Reservations: Not Supported 00:49:05.174 Timestamp: Not Supported 00:49:05.174 Copy: Not Supported 00:49:05.174 Volatile Write Cache: Present 00:49:05.174 Atomic Write Unit (Normal): 1 00:49:05.174 Atomic Write Unit (PFail): 1 00:49:05.174 Atomic Compare & Write Unit: 1 00:49:05.174 Fused Compare & Write: Not Supported 00:49:05.174 Scatter-Gather List 00:49:05.174 SGL Command Set: Supported 00:49:05.174 SGL Keyed: Not Supported 00:49:05.174 SGL Bit Bucket Descriptor: Not Supported 00:49:05.174 SGL Metadata Pointer: Not Supported 00:49:05.174 Oversized SGL: Not Supported 00:49:05.174 SGL Metadata Address: Not Supported 00:49:05.174 SGL Offset: Supported 00:49:05.174 Transport SGL Data Block: Not Supported 00:49:05.174 Replay Protected Memory Block: Not Supported 00:49:05.174 00:49:05.174 Firmware Slot Information 00:49:05.174 ========================= 00:49:05.174 Active slot: 0 00:49:05.174 00:49:05.174 Asymmetric Namespace Access 00:49:05.174 =========================== 00:49:05.174 Change Count : 0 00:49:05.174 Number of ANA Group Descriptors : 1 00:49:05.174 ANA Group Descriptor : 0 00:49:05.174 ANA Group ID : 1 00:49:05.174 Number of NSID Values : 1 00:49:05.174 Change Count : 0 00:49:05.174 ANA State : 1 00:49:05.174 Namespace Identifier : 1 00:49:05.174 00:49:05.174 Commands Supported and Effects 00:49:05.174 ============================== 00:49:05.174 Admin Commands 00:49:05.174 -------------- 00:49:05.174 Get Log Page (02h): Supported 00:49:05.174 Identify (06h): Supported 00:49:05.174 Abort (08h): Supported 00:49:05.174 Set Features (09h): Supported 00:49:05.174 Get Features (0Ah): Supported 00:49:05.174 Asynchronous Event Request (0Ch): Supported 00:49:05.174 Keep Alive (18h): Supported 00:49:05.174 I/O Commands 00:49:05.174 ------------ 00:49:05.174 Flush (00h): Supported 00:49:05.174 Write (01h): Supported LBA-Change 00:49:05.174 Read (02h): Supported 00:49:05.174 Write Zeroes (08h): Supported LBA-Change 00:49:05.174 Dataset Management (09h): Supported 00:49:05.174 00:49:05.174 Error Log 00:49:05.174 ========= 00:49:05.174 Entry: 0 00:49:05.174 Error Count: 0x3 00:49:05.174 Submission Queue Id: 0x0 00:49:05.174 Command Id: 0x5 00:49:05.174 Phase Bit: 0 00:49:05.174 Status Code: 0x2 00:49:05.174 Status Code Type: 0x0 00:49:05.174 Do Not Retry: 1 00:49:05.434 Error 
Location: 0x28 00:49:05.434 LBA: 0x0 00:49:05.434 Namespace: 0x0 00:49:05.434 Vendor Log Page: 0x0 00:49:05.434 ----------- 00:49:05.434 Entry: 1 00:49:05.434 Error Count: 0x2 00:49:05.434 Submission Queue Id: 0x0 00:49:05.434 Command Id: 0x5 00:49:05.434 Phase Bit: 0 00:49:05.434 Status Code: 0x2 00:49:05.434 Status Code Type: 0x0 00:49:05.434 Do Not Retry: 1 00:49:05.434 Error Location: 0x28 00:49:05.434 LBA: 0x0 00:49:05.434 Namespace: 0x0 00:49:05.434 Vendor Log Page: 0x0 00:49:05.434 ----------- 00:49:05.434 Entry: 2 00:49:05.434 Error Count: 0x1 00:49:05.434 Submission Queue Id: 0x0 00:49:05.434 Command Id: 0x4 00:49:05.434 Phase Bit: 0 00:49:05.434 Status Code: 0x2 00:49:05.434 Status Code Type: 0x0 00:49:05.434 Do Not Retry: 1 00:49:05.434 Error Location: 0x28 00:49:05.434 LBA: 0x0 00:49:05.434 Namespace: 0x0 00:49:05.434 Vendor Log Page: 0x0 00:49:05.434 00:49:05.434 Number of Queues 00:49:05.434 ================ 00:49:05.434 Number of I/O Submission Queues: 128 00:49:05.434 Number of I/O Completion Queues: 128 00:49:05.434 00:49:05.434 ZNS Specific Controller Data 00:49:05.434 ============================ 00:49:05.434 Zone Append Size Limit: 0 00:49:05.434 00:49:05.434 00:49:05.434 Active Namespaces 00:49:05.434 ================= 00:49:05.434 get_feature(0x05) failed 00:49:05.434 Namespace ID:1 00:49:05.434 Command Set Identifier: NVM (00h) 00:49:05.434 Deallocate: Supported 00:49:05.434 Deallocated/Unwritten Error: Not Supported 00:49:05.434 Deallocated Read Value: Unknown 00:49:05.434 Deallocate in Write Zeroes: Not Supported 00:49:05.434 Deallocated Guard Field: 0xFFFF 00:49:05.434 Flush: Supported 00:49:05.434 Reservation: Not Supported 00:49:05.434 Namespace Sharing Capabilities: Multiple Controllers 00:49:05.434 Size (in LBAs): 1310720 (5GiB) 00:49:05.434 Capacity (in LBAs): 1310720 (5GiB) 00:49:05.434 Utilization (in LBAs): 1310720 (5GiB) 00:49:05.434 UUID: 6e5bd8c2-fb28-4333-81dd-c15f6431a790 00:49:05.434 Thin Provisioning: Not Supported 00:49:05.434 Per-NS Atomic Units: Yes 00:49:05.434 Atomic Boundary Size (Normal): 0 00:49:05.434 Atomic Boundary Size (PFail): 0 00:49:05.434 Atomic Boundary Offset: 0 00:49:05.434 NGUID/EUI64 Never Reused: No 00:49:05.434 ANA group ID: 1 00:49:05.434 Namespace Write Protected: No 00:49:05.434 Number of LBA Formats: 1 00:49:05.434 Current LBA Format: LBA Format #00 00:49:05.434 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:49:05.434 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:05.434 rmmod nvme_tcp 00:49:05.434 rmmod nvme_fabrics 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:49:05.434 05:39:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:05.434 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:49:05.694 05:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:49:06.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:06.632 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:49:06.632 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:49:06.632 00:49:06.632 real 0m3.387s 00:49:06.632 user 0m1.317s 00:49:06.632 sys 0m1.470s 00:49:06.632 05:39:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:06.632 05:39:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:49:06.632 ************************************ 00:49:06.632 END TEST nvmf_identify_kernel_target 00:49:06.632 ************************************ 00:49:06.632 05:39:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:49:06.632 05:39:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:06.632 05:39:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:06.632 05:39:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:06.632 ************************************ 00:49:06.632 START TEST nvmf_auth_host 00:49:06.632 ************************************ 00:49:06.632 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:49:06.632 * Looking for test storage... 
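Before the nvmf_auth_host test output begins, it is worth summarizing the pattern the nvmf_identify_kernel_target run above exercised: configure_kernel_target exports a spare local NVMe namespace (the scan settled on /dev/nvme1n1) through the Linux kernel nvmet/nvmet-tcp target via configfs, the test runs spdk_nvme_identify against it over TCP, and clean_kernel_target tears the target down again. A minimal standalone sketch of that configfs sequence follows; the attribute file names are inferred from the standard nvmet configfs layout, since the xtrace above shows the echo commands but not their redirection targets, and it assumes a kernel with nvmet configfs support.

```bash
#!/usr/bin/env bash
# Minimal kernel NVMe-oF/TCP target via configfs, mirroring configure_kernel_target (run as root).
set -euo pipefail

nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$nqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1
blkdev=/dev/nvme1n1   # any unused, non-zoned block device (what the scan above picked)
traddr=10.0.0.1

modprobe nvmet nvmet-tcp

mkdir "$subsys"
mkdir "$ns"
mkdir "$port"

echo "SPDK-$nqn" > "$subsys/attr_model"          # shows up as the Model Number in identify
echo 1           > "$subsys/attr_allow_any_host" # no host allow-list
echo "$blkdev"   > "$ns/device_path"
echo 1           > "$ns/enable"

echo "$traddr" > "$port/addr_traddr"
echo tcp       > "$port/addr_trtype"
echo 4420      > "$port/addr_trsvcid"
echo ipv4      > "$port/addr_adrfam"

# Expose the subsystem on the port, then verify it from the host side.
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a "$traddr" -s 4420

# Teardown, as in clean_kernel_target.
echo 0 > "$ns/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$ns" "$port" "$subsys"
modprobe -r nvmet-tcp nvmet
```

With the target in place, the two spdk_nvme_identify invocations in the log are ordinary NVMe/TCP host connections: one to the well-known discovery NQN and one to nqn.2016-06.io.spdk:testnqn.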
00:49:06.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:49:06.632 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:06.632 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:49:06.632 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:06.892 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:06.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.892 --rc genhtml_branch_coverage=1 00:49:06.892 --rc genhtml_function_coverage=1 00:49:06.893 --rc genhtml_legend=1 00:49:06.893 --rc geninfo_all_blocks=1 00:49:06.893 --rc geninfo_unexecuted_blocks=1 00:49:06.893 00:49:06.893 ' 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:06.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.893 --rc genhtml_branch_coverage=1 00:49:06.893 --rc genhtml_function_coverage=1 00:49:06.893 --rc genhtml_legend=1 00:49:06.893 --rc geninfo_all_blocks=1 00:49:06.893 --rc geninfo_unexecuted_blocks=1 00:49:06.893 00:49:06.893 ' 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:06.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.893 --rc genhtml_branch_coverage=1 00:49:06.893 --rc genhtml_function_coverage=1 00:49:06.893 --rc genhtml_legend=1 00:49:06.893 --rc geninfo_all_blocks=1 00:49:06.893 --rc geninfo_unexecuted_blocks=1 00:49:06.893 00:49:06.893 ' 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:06.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.893 --rc genhtml_branch_coverage=1 00:49:06.893 --rc genhtml_function_coverage=1 00:49:06.893 --rc genhtml_legend=1 00:49:06.893 --rc geninfo_all_blocks=1 00:49:06.893 --rc geninfo_unexecuted_blocks=1 00:49:06.893 00:49:06.893 ' 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:06.893 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:06.893 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:06.894 Cannot find device "nvmf_init_br" 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:06.894 Cannot find device "nvmf_init_br2" 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:49:06.894 Cannot find device "nvmf_tgt_br" 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:49:06.894 Cannot find device "nvmf_tgt_br2" 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:06.894 Cannot find device "nvmf_init_br" 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:06.894 Cannot find device "nvmf_init_br2" 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:06.894 Cannot find device "nvmf_tgt_br" 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:06.894 Cannot find device "nvmf_tgt_br2" 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:06.894 Cannot find device "nvmf_br" 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:49:06.894 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:07.153 Cannot find device "nvmf_init_if" 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:07.153 Cannot find device "nvmf_init_if2" 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:07.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:07.153 05:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:07.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:07.153 05:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
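
The nvmf_veth_init trace above builds the virtual test topology: a target network namespace, veth pairs for two initiator and two target interfaces, the 10.0.0.1-10.0.0.4/24 addresses, and a bridge that joins the host-side peers. A condensed sketch of the same wiring for one initiator/target pair, using the interface and namespace names from the trace:

    #!/usr/bin/env bash
    # Condensed sketch of the veth/bridge topology built by nvmf_veth_init
    # (the traced script creates a second pair of each kind as well).
    ns=nvmf_tgt_ns_spdk
    ip netns add "$ns"

    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target end moves into the namespace
    ip link set nvmf_tgt_if netns "$ns"

    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if     # target address

    ip link set nvmf_init_if up
    ip netns exec "$ns" ip link set nvmf_tgt_if up
    ip netns exec "$ns" ip link set lo up

    ip link add nvmf_br type bridge                                 # bridge the host-side veth peers so the
    ip link set nvmf_br up                                          # initiator can reach the namespaced target
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The iptables ACCEPT rules and the four pings that follow in the trace simply verify this path before the SPDK target is started.
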
00:49:07.153 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:07.154 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:07.154 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:07.154 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:07.154 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:07.413 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:07.413 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:49:07.413 00:49:07.413 --- 10.0.0.3 ping statistics --- 00:49:07.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:07.413 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:07.413 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:49:07.413 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:49:07.413 00:49:07.413 --- 10.0.0.4 ping statistics --- 00:49:07.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:07.413 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:07.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:07.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:49:07.413 00:49:07.413 --- 10.0.0.1 ping statistics --- 00:49:07.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:07.413 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:07.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:49:07.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:49:07.413 00:49:07.413 --- 10.0.0.2 ping statistics --- 00:49:07.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:07.413 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78030 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78030 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78030 ']' 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
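
With connectivity verified, nvmfappstart prepends the namespace command to NVMF_APP and launches the SPDK target with nvme_auth debug tracing, then waits for its RPC socket. A rough equivalent of that step, assuming the repo path and namespace name from the trace; the real waitforlisten helper is paraphrased here as a simple polling loop:

    #!/usr/bin/env bash
    # Rough equivalent of 'nvmfappstart -L nvme_auth' as traced above.
    ns=nvmf_tgt_ns_spdk
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    rpc_sock=/var/tmp/spdk.sock

    modprobe nvme-tcp                                   # kernel initiator driver used on the host side

    ip netns exec "$ns" "$tgt" -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # Poll for the RPC socket instead of calling the real waitforlisten helper.
    until [[ -S $rpc_sock ]]; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on $rpc_sock"
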
00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:07.413 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=06181cd34d430c7bee15656eccacb93f 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.9ob 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 06181cd34d430c7bee15656eccacb93f 0 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 06181cd34d430c7bee15656eccacb93f 0 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=06181cd34d430c7bee15656eccacb93f 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:49:07.673 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.9ob 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.9ob 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.9ob 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:49:07.933 05:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e212d699619d93f6c76a008ab9c9ecd151a33d367b0b977a8676481608540e3b 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gmj 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e212d699619d93f6c76a008ab9c9ecd151a33d367b0b977a8676481608540e3b 3 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e212d699619d93f6c76a008ab9c9ecd151a33d367b0b977a8676481608540e3b 3 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e212d699619d93f6c76a008ab9c9ecd151a33d367b0b977a8676481608540e3b 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gmj 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gmj 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gmj 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9e28057a5f0a193d940a0bea9669b11b0029a3d70ec9b6a7 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5HR 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9e28057a5f0a193d940a0bea9669b11b0029a3d70ec9b6a7 0 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9e28057a5f0a193d940a0bea9669b11b0029a3d70ec9b6a7 0 
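
The gen_dhchap_key traces above and below produce every key and challenge key (ckey) used by auth.sh: a random hex secret of the requested length is read from /dev/urandom, wrapped into the DHHC-1:<digest>:<base64 blob>: on-disk format, and written to a mode-0600 temp file. A simplified sketch of that flow; the digest ids (0=null, 1=sha256, 2=sha384, 3=sha512) match the trace, while the CRC32 trailer and its byte order are assumptions inferred from the DHHC-1:00:... value that appears later in this log rather than taken from the traced python step:

    #!/usr/bin/env bash
    # Simplified sketch of 'gen_dhchap_key null 48' as traced above.
    digest=0 len=48

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # $len hex characters of random secret
    file=$(mktemp -t spdk.key-null.XXX)

    # Wrap the hex string into DHHC-1:<digest>:<base64(secret + crc32)>: .
    # The 4-byte little-endian CRC32 trailer is an assumption, not confirmed by the trace.
    python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest" > "$file"

    chmod 0600 "$file"
    echo "$file"

The resulting file paths are what keyring_file_add_key registers with the running target over RPC a little later in this run (key0/ckey0 through key4).
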
00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9e28057a5f0a193d940a0bea9669b11b0029a3d70ec9b6a7 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5HR 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5HR 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.5HR 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=15a2a68a3dd2ad836465c9ea4a11efecb8aac0834d6bfd59 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zI0 00:49:07.933 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 15a2a68a3dd2ad836465c9ea4a11efecb8aac0834d6bfd59 2 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 15a2a68a3dd2ad836465c9ea4a11efecb8aac0834d6bfd59 2 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=15a2a68a3dd2ad836465c9ea4a11efecb8aac0834d6bfd59 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zI0 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zI0 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zI0 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:49:07.934 05:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b3d2e71d2d1d45bfe7ae32aeb9811bf4 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.di7 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b3d2e71d2d1d45bfe7ae32aeb9811bf4 1 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b3d2e71d2d1d45bfe7ae32aeb9811bf4 1 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b3d2e71d2d1d45bfe7ae32aeb9811bf4 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:49:07.934 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.di7 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.di7 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.di7 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f435851a9eeca268117cfd6d52853918 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OV5 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f435851a9eeca268117cfd6d52853918 1 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f435851a9eeca268117cfd6d52853918 1 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=f435851a9eeca268117cfd6d52853918 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OV5 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OV5 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.OV5 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e77ec8235791cfa406a63d7cfb7b63bfb553b691c966e0d7 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.PQc 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e77ec8235791cfa406a63d7cfb7b63bfb553b691c966e0d7 2 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e77ec8235791cfa406a63d7cfb7b63bfb553b691c966e0d7 2 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e77ec8235791cfa406a63d7cfb7b63bfb553b691c966e0d7 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:49:08.194 05:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:49:08.194 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.PQc 00:49:08.194 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.PQc 00:49:08.194 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.PQc 00:49:08.194 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:49:08.194 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:49:08.194 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:49:08.194 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:49:08.194 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:49:08.194 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:49:08.195 05:39:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6ed2eed9d82d4d571c16bf8159d8451c 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.JLV 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6ed2eed9d82d4d571c16bf8159d8451c 0 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6ed2eed9d82d4d571c16bf8159d8451c 0 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6ed2eed9d82d4d571c16bf8159d8451c 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.JLV 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.JLV 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.JLV 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=34e97bb495866934ab2db75f9aa4c8c28e2b1ea9019942a40303402d8df08cde 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.uFK 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 34e97bb495866934ab2db75f9aa4c8c28e2b1ea9019942a40303402d8df08cde 3 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 34e97bb495866934ab2db75f9aa4c8c28e2b1ea9019942a40303402d8df08cde 3 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=34e97bb495866934ab2db75f9aa4c8c28e2b1ea9019942a40303402d8df08cde 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:49:08.195 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.uFK 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.uFK 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.uFK 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78030 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78030 ']' 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:08.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:08.454 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9ob 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gmj ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gmj 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5HR 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zI0 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.zI0 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.di7 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.OV5 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OV5 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.PQc 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.JLV ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.JLV 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.uFK 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:08.713 05:39:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:49:08.713 05:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:49:08.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:09.230 Waiting for block devices as requested 00:49:09.230 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:49:09.230 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:49:09.796 No valid GPT data, bailing 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:49:09.796 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:49:10.055 No valid GPT data, bailing 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:49:10.055 No valid GPT data, bailing 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:49:10.055 No valid GPT data, bailing 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:49:10.055 05:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -a 10.0.0.1 -t tcp -s 4420 00:49:10.055 00:49:10.055 Discovery Log Number of Records 2, Generation counter 2 00:49:10.055 =====Discovery Log Entry 0====== 00:49:10.055 trtype: tcp 00:49:10.055 adrfam: ipv4 00:49:10.055 subtype: current discovery subsystem 00:49:10.055 treq: not specified, sq flow control disable supported 00:49:10.055 portid: 1 00:49:10.055 trsvcid: 4420 00:49:10.055 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:49:10.055 traddr: 10.0.0.1 00:49:10.055 eflags: none 00:49:10.055 sectype: none 00:49:10.055 =====Discovery Log Entry 1====== 00:49:10.055 trtype: tcp 00:49:10.055 adrfam: ipv4 00:49:10.055 subtype: nvme subsystem 00:49:10.055 treq: not specified, sq flow control disable supported 00:49:10.055 portid: 1 00:49:10.055 trsvcid: 4420 00:49:10.055 subnqn: nqn.2024-02.io.spdk:cnode0 00:49:10.055 traddr: 10.0.0.1 00:49:10.055 eflags: none 00:49:10.055 sectype: none 00:49:10.055 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
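
The xtrace above elides the redirection targets of the echo commands, but the sequence is the standard nvmet configfs bring-up: create the subsystem, expose the chosen namespace, open a TCP listener on 10.0.0.1:4420, link the subsystem to the port, and then register the test host NQN under allowed_hosts. A sketch of the same steps with the conventional attribute names filled in (the file names are inferred from the usual nvmet layout, not shown in the trace, and the model-string and related echoes are omitted):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"

    # namespace 1 backed by the block device selected above
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"

    # TCP listener on 10.0.0.1:4420
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

    # restrict access to the test host NQN
    mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/attr_allow_any_host"
    ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"

    # sanity check from the initiator side, as the trace does
    nvme discover -t tcp -a 10.0.0.1 -s 4420

The two-record discovery log in the output (the discovery subsystem plus nqn.2024-02.io.spdk:cnode0) confirms the listener and the subsystem link are in place before authentication is exercised.
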
ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
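
nvmet_auth_set_key programs the target side of DH-HMAC-CHAP for this host: the 'hmac(sha256)' string is a kernel crypto algorithm name, ffdhe2048 selects the Diffie-Hellman group, and the two DHHC-1 strings are the host secret and the optional controller secret. In the DHHC-1:NN:<base64>: format the middle field records how the secret was transformed (00 means it is used as-is). The trace again hides the redirection targets; under the usual per-host nvmet auth attributes the writes would look roughly like this (attribute names assumed, secrets replaced with placeholders):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest used for DH-HMAC-CHAP
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # FFDHE group
    echo 'DHHC-1:00:<host-secret-base64>:' > "$host/dhchap_key"       # what the host must present
    echo 'DHHC-1:02:<ctrl-secret-base64>:' > "$host/dhchap_ctrl_key"  # only for bidirectional auth

Secrets in this format are normally produced with nvme-cli's gen-dhchap-key subcommand rather than hand-rolled base64.
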
10.0.0.1 ]] 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.314 nvme0n1 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.314 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:10.572 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
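
On the initiator side, connect_authenticate drives everything through SPDK's RPC interface (rpc_cmd is the test wrapper around scripts/rpc.py). Stripped of the wrapper, the cycle recorded above is approximately the following; it assumes the DHHC-1 secrets were loaded into SPDK's keyring earlier in the test under the names key1/ckey1:

    # advertise the digests and DH groups the initiator may negotiate
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

    # attach to the kernel target, authenticating with key 1 in both directions
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # verify the controller exists, then detach so the next iteration starts clean
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
    scripts/rpc.py bdev_nvme_detach_controller nvme0
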
host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.573 nvme0n1 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.573 
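
From here the rest of the section is the same cycle repeated by the loops declared at host/auth.sh@100-104: for every digest, every DH group, and every key index, reprogram the kernel target and then attach, verify, and detach through the RPCs shown above. Paraphrased:

    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in "${!keys[@]}"; do                          # key indices 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (configfs)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side (SPDK RPC)
            done
        done
    done

The "nvme0n1" markers between iterations are the namespace name reported once each newly attached controller comes up.
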
05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:10.573 05:39:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.573 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.878 nvme0n1 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:10.878 05:39:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.878 nvme0n1 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.878 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.137 05:39:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:11.137 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.138 nvme0n1 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.138 05:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:11.138 
05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.138 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
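
Note the attach call just above: key index 4 has an empty controller secret (ckey=''), so the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion contributes nothing and the controller is attached with --dhchap-key key4 alone, i.e. the host authenticates itself but does not challenge the controller. A small illustration of that bash idiom (values made up):

    ckeys=([0]=secret0 [4]="")                  # index 4 is set but empty
    keyid=4
    args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#args[@]}"                          # 0 -> no controller-key argument added
    keyid=0
    args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${args[@]}"                           # --dhchap-ctrlr-key ckey0
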
00:49:11.397 nvme0n1 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:11.397 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:11.656 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:11.656 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:11.656 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:11.656 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:11.657 05:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.657 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.931 nvme0n1 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:11.931 05:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:11.931 05:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.931 nvme0n1 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:11.931 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.191 05:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.191 nvme0n1 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:12.191 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.192 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.450 nvme0n1 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.450 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.709 nvme0n1 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:12.709 05:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.277 05:40:00 
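
The host/auth.sh@101-104 frames in the trace above show the test's driver loop: an outer loop over DH groups and an inner loop over key indices, where each iteration first programs the target side via nvmet_auth_set_key and then exercises the SPDK host via connect_authenticate. A minimal sketch of that structure, reconstructed from the trace rather than copied from the actual script (variable names and the dhgroup list are assumptions; only ffdhe3072, ffdhe4096, ffdhe6144 and ffdhe8192 are visible in this excerpt):

  digest=sha256
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)      # only the groups visible in this excerpt
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do                       # keys[0..4] hold the DHHC-1 secrets
          nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # set the expected key/ckey on the target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach on the SPDK host side
      done
  done
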
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.277 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:13.536 nvme0n1 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.536 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:13.537 05:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.537 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:13.796 nvme0n1 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:13.796 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.797 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.056 nvme0n1 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
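
Every iteration also replays the get_main_ns_ip helper (the nvmf/common.sh@769-783 lines): it maps the transport to the environment variable that holds the connect address and expands it, which is why every attach in this log targets 10.0.0.1. A paraphrase of that helper as it appears in the trace, not the verbatim function; the TEST_TRANSPORT variable name is an assumption, since the trace only shows the already-expanded literal tcp:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      ip=${ip_candidates[$TEST_TRANSPORT]}   # transport is tcp in this run
      [[ -n ${!ip} ]] && echo "${!ip}"       # indirect expansion -> 10.0.0.1 here
  }
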
nvmet_auth_set_key sha256 ffdhe4096 3 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:14.056 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:14.057 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:14.057 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:14.057 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:14.057 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:14.057 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:14.057 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:14.057 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:14.057 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:14.057 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.057 05:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.316 nvme0n1 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:14.316 05:40:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.316 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.574 nvme0n1 00:49:14.574 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.574 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
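
Each connect_authenticate pass (host/auth.sh@55-65 in the trace) reduces to four RPC calls against the SPDK host. Spelled out as standalone scripts/rpc.py invocations, a sketch under the assumption that rpc_cmd is the usual test wrapper around scripts/rpc.py and that key2/ckey2 are keyring entries registered earlier in the run (not shown in this excerpt); the keyid=2, ffdhe4096 pass is used as the example:

  # enable DH-HMAC-CHAP with the digest and DH group under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # connect to the target at 10.0.0.1:4420 with bidirectional authentication
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # the iteration passes only if the authenticated controller actually shows up
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0
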
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:14.575 05:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:15.952 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.211 05:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:16.470 nvme0n1 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.470 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:16.729 nvme0n1 00:49:16.729 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.729 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:16.729 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:16.729 05:40:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.729 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:16.729 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.729 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.730 05:40:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.730 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:17.297 nvme0n1 00:49:17.297 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:17.297 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:17.297 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:17.297 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.297 05:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:17.298 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.298 
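
The secrets echoed throughout this block use the standard DHHC-1 representation for DH-HMAC-CHAP secrets: a DHHC-1 tag, a two-digit hash indicator (00 for an untransformed secret; 01, 02 and 03 conventionally mean a SHA-256, SHA-384 or SHA-512 transformed secret), a base64 payload carrying the secret plus a CRC-32 check value, and a trailing colon. A small illustration that splits one of the controller keys from this trace into those fields; the field meanings follow the common DHHC-1 convention and are not stated anywhere in this log:

  secret='DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz:'
  IFS=: read -r tag hmac blob _ <<<"$secret"
  echo "tag=$tag hash-indicator=$hmac payload=$blob"   # payload = base64(secret || crc32)
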
05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:17.557 nvme0n1 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:17.557 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:17.558 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:17.558 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:17.558 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:17.558 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:17.558 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:17.558 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:17.558 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:17.558 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:17.558 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.558 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:17.817 nvme0n1 00:49:17.817 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:17.817 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:17.817 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:17.817 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.817 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:18.076 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.076 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:18.076 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:18.076 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.076 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:18.076 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:18.077 05:40:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.077 05:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:18.656 nvme0n1 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.656 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:19.224 nvme0n1 00:49:19.224 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.224 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:19.224 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:19.224 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.224 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:19.224 05:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:19.225 
05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.225 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:19.791 nvme0n1 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.791 05:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:20.357 nvme0n1 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:20.357 05:40:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:20.357 05:40:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.357 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:20.926 nvme0n1 00:49:20.926 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.926 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:20.926 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:20.926 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.926 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:20.926 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.927 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:49:21.185 nvme0n1 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.185 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.186 05:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.186 nvme0n1 00:49:21.186 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.186 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:21.186 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.186 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.186 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:21.186 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.186 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:21.186 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:21.186 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.186 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.444 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.444 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:21.444 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:49:21.444 
05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:21.444 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:21.444 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.445 nvme0n1 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:21.445 
05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.445 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.760 nvme0n1 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.760 nvme0n1 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:21.760 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:21.761 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.761 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:21.761 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.761 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:21.761 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:21.761 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:21.761 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:21.761 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:21.761 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:21.761 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.019 nvme0n1 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.019 
05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:22.019 05:40:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.019 05:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.278 nvme0n1 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:22.278 05:40:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.278 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.537 nvme0n1 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.537 05:40:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:22.537 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.538 nvme0n1 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.538 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:22.796 
05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:22.796 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
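The trace above repeats one host-side routine for every digest/dhgroup/key combination under test. For reference, each iteration reduces to the RPC calls below; this is a condensed sketch assembled from the rpc_cmd invocations visible in the trace (rpc_cmd being the test harness's RPC wrapper), not a verbatim excerpt of the log. The address, NQNs, and key names are the ones used above; key id 4 has no controller key, so its attach drops --dhchap-ctrlr-key.

    # per-iteration host flow (digest, dhgroup and keyid vary with the loops traced above)
    digest=sha384 dhgroup=ffdhe3072 keyid=0

    # restrict the initiator to a single digest/dhgroup pair before connecting
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # connect to the target at 10.0.0.1:4420 with the matching DH-HMAC-CHAP keys
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # confirm the controller attached, then detach it before the next combination
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0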
00:49:22.797 nvme0n1 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:22.797 05:40:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.797 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.055 nvme0n1 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.055 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:23.056 05:40:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:49:23.056 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:23.056 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:23.056 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:23.056 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:23.056 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:23.056 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:23.056 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:23.056 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:23.056 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:23.056 05:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:23.056 05:40:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.056 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.315 nvme0n1 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:23.315 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.574 nvme0n1 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.574 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.833 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.833 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:23.833 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:23.833 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.834 nvme0n1 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.834 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.094 05:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.094 nvme0n1 00:49:24.094 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.094 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:24.094 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.094 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:24.094 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.094 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.094 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:24.094 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:24.094 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.094 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.353 05:40:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:24.353 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:24.354 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:24.354 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.354 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.613 nvme0n1 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:24.613 05:40:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.613 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.872 nvme0n1 00:49:24.872 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.872 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:24.872 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:24.872 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.872 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:24.873 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.132 05:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:25.394 nvme0n1 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:25.394 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:25.395 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.395 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:25.652 nvme0n1 00:49:25.652 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.652 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:25.652 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:25.652 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.652 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:25.652 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:25.911 05:40:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.911 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:26.170 nvme0n1 00:49:26.170 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.170 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:26.170 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:26.170 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.170 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:26.170 05:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.170 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:26.738 nvme0n1 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:26.738 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.739 05:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:27.351 nvme0n1 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:27.351 05:40:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:27.351 05:40:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:27.351 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:27.926 nvme0n1 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:27.926 05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:27.926 
05:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:28.494 nvme0n1 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:28.494 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:28.495 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.062 nvme0n1 00:49:29.062 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.062 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:29.062 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:29.062 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.062 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.062 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.062 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:29.062 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:29.062 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.062 05:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:49:29.062 05:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:29.062 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:29.063 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:29.063 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:29.063 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:29.063 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:29.063 05:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:29.063 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:29.063 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:29.063 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.322 nvme0n1 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:29.322 05:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.322 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.582 nvme0n1 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.582 nvme0n1 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.582 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.843 nvme0n1 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.843 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.103 nvme0n1 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:30.103 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.104 05:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:49:30.104 nvme0n1 00:49:30.104 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.104 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:30.104 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:30.104 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.104 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.104 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.363 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:30.363 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:30.363 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.363 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.363 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.364 nvme0n1 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:49:30.364 
05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.364 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.624 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.624 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:30.624 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:30.624 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:30.624 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:30.624 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:30.624 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:30.624 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:30.624 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.625 nvme0n1 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:30.625 
05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.625 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.885 nvme0n1 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:30.885 nvme0n1 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:30.885 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.145 05:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.145 nvme0n1 00:49:31.145 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.145 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:31.145 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:31.145 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.145 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.145 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.405 
05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:31.405 05:40:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.405 nvme0n1 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.405 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:31.664 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:31.665 05:40:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.665 nvme0n1 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.665 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.924 05:40:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:31.924 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.925 nvme0n1 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:31.925 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:49:32.184 
05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.184 05:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:49:32.184 nvme0n1 00:49:32.185 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.185 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:32.185 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.185 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.185 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:32.185 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.443 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:32.443 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:32.443 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.443 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.443 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.443 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:32.443 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:32.443 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:49:32.443 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:32.443 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:32.444 05:40:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.444 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.702 nvme0n1 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:32.702 05:40:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:32.702 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:32.703 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:32.703 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:32.703 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:32.703 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:32.703 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:32.703 05:40:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:32.703 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:32.703 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:32.703 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:32.703 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.703 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.961 nvme0n1 00:49:32.961 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.961 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:32.961 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.961 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:32.961 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:32.961 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:33.219 05:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:33.219 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:33.219 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:33.219 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:33.219 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:33.219 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:33.219 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:33.219 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:33.219 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:33.477 nvme0n1 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:33.477 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:33.735 nvme0n1 00:49:33.735 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:33.735 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:33.735 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:33.735 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:33.735 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.027 05:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.285 nvme0n1 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYxODFjZDM0ZDQzMGM3YmVlMTU2NTZlY2NhY2I5M2a23tdx: 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: ]] 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxMmQ2OTk2MTlkOTNmNmM3NmEwMDhhYjljOWVjZDE1MWEzM2QzNjdiMGI5NzdhODY3NjQ4MTYwODU0MGUzYiW9Ncw=: 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.285 05:40:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:34.285 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:34.286 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:34.286 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:34.286 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:34.286 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:34.286 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:34.286 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:34.286 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:34.286 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.286 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.852 nvme0n1 00:49:34.852 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.852 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:34.852 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.852 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:34.853 05:40:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.853 05:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:35.420 nvme0n1 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:35.420 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:36.082 nvme0n1 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc3ZWM4MjM1NzkxY2ZhNDA2YTYzZDdjZmI3YjYzYmZiNTUzYjY5MWM5NjZlMGQ3Tjilbg==: 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: ]] 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmVkMmVlZDlkODJkNGQ1NzFjMTZiZjgxNTlkODQ1MWP+p4Pz: 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:36.082 05:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:36.649 nvme0n1 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzRlOTdiYjQ5NTg2NjkzNGFiMmRiNzVmOWFhNGM4YzI4ZTJiMWVhOTAxOTk0MmE0MDMwMzQwMmQ4ZGYwOGNkZU1U43g=: 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:49:36.649 05:40:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:36.649 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:36.650 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:49:36.650 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:36.650 05:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.217 nvme0n1 00:49:37.217 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.217 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:49:37.217 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.217 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:49:37.217 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.217 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.217 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:37.217 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.218 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.477 request: 00:49:37.477 { 00:49:37.477 "name": "nvme0", 00:49:37.477 "trtype": "tcp", 00:49:37.477 "traddr": "10.0.0.1", 00:49:37.477 "adrfam": "ipv4", 00:49:37.477 "trsvcid": "4420", 00:49:37.477 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:49:37.477 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:49:37.477 "prchk_reftag": false, 00:49:37.477 "prchk_guard": false, 00:49:37.477 "hdgst": false, 00:49:37.477 "ddgst": false, 00:49:37.477 "allow_unrecognized_csi": false, 00:49:37.477 "method": "bdev_nvme_attach_controller", 00:49:37.477 "req_id": 1 00:49:37.477 } 00:49:37.477 Got JSON-RPC error response 00:49:37.477 response: 00:49:37.477 { 00:49:37.477 "code": -5, 00:49:37.477 "message": "Input/output error" 00:49:37.477 } 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:37.477 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.478 request: 00:49:37.478 { 00:49:37.478 "name": "nvme0", 00:49:37.478 "trtype": "tcp", 00:49:37.478 "traddr": "10.0.0.1", 00:49:37.478 "adrfam": "ipv4", 00:49:37.478 "trsvcid": "4420", 00:49:37.478 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:49:37.478 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:49:37.478 "prchk_reftag": false, 00:49:37.478 "prchk_guard": false, 00:49:37.478 "hdgst": false, 00:49:37.478 "ddgst": false, 00:49:37.478 "dhchap_key": "key2", 00:49:37.478 "allow_unrecognized_csi": false, 00:49:37.478 "method": "bdev_nvme_attach_controller", 00:49:37.478 "req_id": 1 00:49:37.478 } 00:49:37.478 Got JSON-RPC error response 00:49:37.478 response: 00:49:37.478 { 00:49:37.478 "code": -5, 00:49:37.478 "message": "Input/output error" 00:49:37.478 } 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:49:37.478 05:40:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.478 request: 00:49:37.478 { 00:49:37.478 "name": "nvme0", 00:49:37.478 "trtype": "tcp", 00:49:37.478 "traddr": "10.0.0.1", 00:49:37.478 "adrfam": "ipv4", 00:49:37.478 "trsvcid": "4420", 
00:49:37.478 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:49:37.478 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:49:37.478 "prchk_reftag": false, 00:49:37.478 "prchk_guard": false, 00:49:37.478 "hdgst": false, 00:49:37.478 "ddgst": false, 00:49:37.478 "dhchap_key": "key1", 00:49:37.478 "dhchap_ctrlr_key": "ckey2", 00:49:37.478 "allow_unrecognized_csi": false, 00:49:37.478 "method": "bdev_nvme_attach_controller", 00:49:37.478 "req_id": 1 00:49:37.478 } 00:49:37.478 Got JSON-RPC error response 00:49:37.478 response: 00:49:37.478 { 00:49:37.478 "code": -5, 00:49:37.478 "message": "Input/output error" 00:49:37.478 } 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.478 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.737 nvme0n1 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.737 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.738 request: 00:49:37.738 { 00:49:37.738 "name": "nvme0", 00:49:37.738 "dhchap_key": "key1", 00:49:37.738 "dhchap_ctrlr_key": "ckey2", 00:49:37.738 "method": "bdev_nvme_set_keys", 00:49:37.738 "req_id": 1 00:49:37.738 } 00:49:37.738 Got JSON-RPC error response 00:49:37.738 response: 00:49:37.738 
{ 00:49:37.738 "code": -13, 00:49:37.738 "message": "Permission denied" 00:49:37.738 } 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:49:37.738 05:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyODA1N2E1ZjBhMTkzZDk0MGEwYmVhOTY2OWIxMWIwMDI5YTNkNzBlYzliNmE3/RxH9A==: 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: ]] 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTVhMmE2OGEzZGQyYWQ4MzY0NjVjOWVhNGExMWVmZWNiOGFhYzA4MzRkNmJmZDU58wSHOg==: 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:39.113 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:39.114 nvme0n1 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkMmU3MWQyZDFkNDViZmU3YWUzMmFlYjk4MTFiZjQZJtsi: 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: ]] 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzNTg1MWE5ZWVjYTI2ODExN2NmZDZkNTI4NTM5MThfwfpr: 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:39.114 request: 00:49:39.114 { 00:49:39.114 "name": "nvme0", 00:49:39.114 "dhchap_key": "key2", 00:49:39.114 "dhchap_ctrlr_key": "ckey1", 00:49:39.114 "method": "bdev_nvme_set_keys", 00:49:39.114 "req_id": 1 00:49:39.114 } 00:49:39.114 Got JSON-RPC error response 00:49:39.114 response: 00:49:39.114 { 00:49:39.114 "code": -13, 00:49:39.114 "message": "Permission denied" 00:49:39.114 } 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:49:39.114 05:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:40.049 05:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:40.049 rmmod nvme_tcp 00:49:40.049 rmmod nvme_fabrics 00:49:40.049 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78030 ']' 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78030 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78030 ']' 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78030 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78030 00:49:40.308 killing process with pid 78030 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78030' 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78030 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78030 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:40.308 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:40.308 05:40:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:49:40.566 05:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:49:41.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:41.502 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
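[editor's note] For readers following the authentication flow rather than the raw trace: each nvmf_auth_host iteration above reduces to the same short JSON-RPC sequence against the SPDK initiator. The sketch below reconstructs that sequence with scripts/rpc.py for the sha512/ffdhe8192 pass; the NQNs, address, port and option names are copied from the log, but the key-registration step is not visible in this excerpt, so the keyring_file_add_key calls and the pairing of key names to the /tmp/spdk.key-* files (removed later during cleanup) are assumptions — treat this as an illustrative outline, not the test script itself.

    # Assumed step: register the host key and controller key with the keyring
    # (auth.sh generates these files itself; which file backs which key id is not shown here).
    scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.uFK
    scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-sha256.di7

    # Pin the initiator to a single digest and DH group for this iteration.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Attach to the kernel target with DH-HMAC-CHAP; passing ckey3 as well makes the
    # authentication bidirectional (host also verifies the controller).
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # Confirm the controller came up, then drop it before the next digest/dhgroup combination.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The negative checks in the log follow the same shape: attaching without keys, or with a key the target does not accept, makes bdev_nvme_attach_controller return the -5 "Input/output error" responses shown above, and re-keying a live controller with a mismatched controller key is rejected by bdev_nvme_set_keys with -13 "Permission denied" before any reconnect is attempted.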
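[editor's note] The cleanup that has just run dismantles the kernel nvmet target by unwinding configfs in the reverse order of setup. A condensed sketch of that sequence is below; the paths, NQNs and module names are taken from the log, while the redirect target of the bare "echo 0" is not visible there, so pointing it at the namespace's enable attribute is an assumption (and the whole sequence needs root).

    #!/usr/bin/env bash
    # Tear down the kernel NVMe-oF target that served as the authentication peer.
    cfg=/sys/kernel/config/nvmet
    subsys=nqn.2024-02.io.spdk:cnode0
    host=nqn.2024-02.io.spdk:host0

    rm    "$cfg/subsystems/$subsys/allowed_hosts/$host"      # drop the allowed-host link
    rmdir "$cfg/hosts/$host"                                 # remove the host entry
    echo 0 > "$cfg/subsystems/$subsys/namespaces/1/enable"   # assumed: disable the namespace first
    rm -f "$cfg/ports/1/subsystems/$subsys"                  # unlink the subsystem from port 1
    rmdir "$cfg/subsystems/$subsys/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$subsys"
    modprobe -r nvmet_tcp nvmet                              # unload the kernel target modules

After the modules are removed, the test re-runs scripts/setup.sh, which is what produces the surrounding "nvme -> uio_pci_generic" rebinding messages before the next test (nvmf_digest) starts.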
00:49:41.502 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:49:41.502 05:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.9ob /tmp/spdk.key-null.5HR /tmp/spdk.key-sha256.di7 /tmp/spdk.key-sha384.PQc /tmp/spdk.key-sha512.uFK /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:49:41.502 05:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:49:41.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:41.759 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:49:41.759 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:49:42.019 ************************************ 00:49:42.019 END TEST nvmf_auth_host 00:49:42.019 ************************************ 00:49:42.019 00:49:42.019 real 0m35.257s 00:49:42.019 user 0m32.752s 00:49:42.019 sys 0m3.749s 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:42.019 ************************************ 00:49:42.019 START TEST nvmf_digest 00:49:42.019 ************************************ 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:49:42.019 * Looking for test storage... 
00:49:42.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:42.019 05:40:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:42.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:42.279 --rc genhtml_branch_coverage=1 00:49:42.279 --rc genhtml_function_coverage=1 00:49:42.279 --rc genhtml_legend=1 00:49:42.279 --rc geninfo_all_blocks=1 00:49:42.279 --rc geninfo_unexecuted_blocks=1 00:49:42.279 00:49:42.279 ' 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:42.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:42.279 --rc genhtml_branch_coverage=1 00:49:42.279 --rc genhtml_function_coverage=1 00:49:42.279 --rc genhtml_legend=1 00:49:42.279 --rc geninfo_all_blocks=1 00:49:42.279 --rc geninfo_unexecuted_blocks=1 00:49:42.279 00:49:42.279 ' 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:42.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:42.279 --rc genhtml_branch_coverage=1 00:49:42.279 --rc genhtml_function_coverage=1 00:49:42.279 --rc genhtml_legend=1 00:49:42.279 --rc geninfo_all_blocks=1 00:49:42.279 --rc geninfo_unexecuted_blocks=1 00:49:42.279 00:49:42.279 ' 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:42.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:42.279 --rc genhtml_branch_coverage=1 00:49:42.279 --rc genhtml_function_coverage=1 00:49:42.279 --rc genhtml_legend=1 00:49:42.279 --rc geninfo_all_blocks=1 00:49:42.279 --rc geninfo_unexecuted_blocks=1 00:49:42.279 00:49:42.279 ' 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:42.279 05:40:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:42.279 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:42.280 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:42.280 Cannot find device "nvmf_init_br" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:42.280 Cannot find device "nvmf_init_br2" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:49:42.280 Cannot find device "nvmf_tgt_br" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:49:42.280 Cannot find device "nvmf_tgt_br2" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:42.280 Cannot find device "nvmf_init_br" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:42.280 Cannot find device "nvmf_init_br2" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:42.280 Cannot find device "nvmf_tgt_br" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:42.280 Cannot find device "nvmf_tgt_br2" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:42.280 Cannot find device "nvmf_br" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:42.280 Cannot find device "nvmf_init_if" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:42.280 Cannot find device "nvmf_init_if2" 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:42.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:42.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:42.280 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:42.540 05:40:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:42.540 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:49:42.540 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:49:42.540 00:49:42.540 --- 10.0.0.3 ping statistics --- 00:49:42.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:42.540 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:42.540 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:49:42.540 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:49:42.540 00:49:42.540 --- 10.0.0.4 ping statistics --- 00:49:42.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:42.540 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:42.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:42.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:49:42.540 00:49:42.540 --- 10.0.0.1 ping statistics --- 00:49:42.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:42.540 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:42.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:42.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:49:42.540 00:49:42.540 --- 10.0.0.2 ping statistics --- 00:49:42.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:42.540 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:49:42.540 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:49:42.541 ************************************ 00:49:42.541 START TEST nvmf_digest_clean 00:49:42.541 ************************************ 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
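The nvmf_veth_init sequence traced above builds a small veth/bridge topology entirely on the build VM, then verifies it with the four pings before any NVMe traffic flows. Condensed into plain ip/iptables commands (interface names and addresses exactly as in the trace; the symmetric second initiator/target pair and the per-link 'up' commands are elided for brevity, and the harness additionally tags each iptables rule with an SPDK_NVMF comment):

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: *_if carries an IP address, *_br is the end enslaved to the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # addressing: initiator 10.0.0.1, target 10.0.0.3 (the .2/.4 pair is set up the same way)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # one bridge ties the host-side veth ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_br up

  # allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # connectivity check, as traced above for 10.0.0.1 through 10.0.0.4
  ping -c 1 10.0.0.3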
00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79653 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79653 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79653 ']' 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:42.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:42.541 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:49:42.799 [2024-12-09 05:40:29.537739] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:49:42.799 [2024-12-09 05:40:29.538062] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:42.799 [2024-12-09 05:40:29.701855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:42.799 [2024-12-09 05:40:29.739722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:42.799 [2024-12-09 05:40:29.739779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:42.799 [2024-12-09 05:40:29.739793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:42.799 [2024-12-09 05:40:29.739803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:42.799 [2024-12-09 05:40:29.739812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
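The target itself is started inside that namespace with --wait-for-rpc (the nvmf_tgt command line is traced above) and then configured over /var/tmp/spdk.sock. The configuration is issued through rpc_cmd and its payload is not echoed in this trace; the notices that follow (the null0 bdev, the TCP transport init, and the listener on 10.0.0.3:4420) are consistent with the standard rpc.py calls sketched below. The null-bdev size, the transport options, and the use of framework_start_init to resume the paused app are assumptions, not taken from the trace:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_start_init                          # resume init (app was started with --wait-for-rpc); assumed
  $rpc bdev_null_create null0 100 4096               # backing bdev; size/block size assumed
  $rpc nvmf_create_transport -t tcp                  # trace shows NVMF_TRANSPORT_OPTS='-t tcp -o'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420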
00:49:42.799 [2024-12-09 05:40:29.740176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:49:43.058 [2024-12-09 05:40:29.876066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:43.058 null0 00:49:43.058 [2024-12-09 05:40:29.916362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:43.058 [2024-12-09 05:40:29.940492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79678 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79678 /var/tmp/bperf.sock 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79678 ']' 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:43.058 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock... 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:49:43.058 05:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:49:43.058 [2024-12-09 05:40:30.004337] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:49:43.058 [2024-12-09 05:40:30.004423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79678 ] 00:49:43.324 [2024-12-09 05:40:30.162238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:43.324 [2024-12-09 05:40:30.201957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:43.324 05:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:43.324 05:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:49:43.324 05:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:49:43.324 05:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:49:43.324 05:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:49:43.891 [2024-12-09 05:40:30.560858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:43.891 05:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:43.891 05:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:44.149 nvme0n1 00:49:44.149 05:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:49:44.149 05:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:44.149 Running I/O for 2 seconds... 
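On the initiator side, the sequence traced above is the core of the clean-digest test: bdevperf is started paused on its own RPC socket, the NVMe-oF controller is attached with --ddgst so the NVMe/TCP data digest (CRC32C) is negotiated before any I/O, and only then is the 2-second workload kicked off through bdevperf.py. The exact commands from the trace, reformatted:

  # bdevperf on core 1 (-m 2), private RPC socket, paused until framework_start_init
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock framework_start_init

  # --ddgst enables the TCP data digest for this controller (--hdgst is the header-digest counterpart)
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the timed workload against the resulting nvme0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests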
00:49:46.458 17399.00 IOPS, 67.96 MiB/s [2024-12-09T05:40:33.430Z] 17526.00 IOPS, 68.46 MiB/s 00:49:46.458 Latency(us) 00:49:46.458 [2024-12-09T05:40:33.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:46.458 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:49:46.458 nvme0n1 : 2.00 17548.72 68.55 0.00 0.00 7289.26 6732.33 18230.92 00:49:46.458 [2024-12-09T05:40:33.430Z] =================================================================================================================== 00:49:46.458 [2024-12-09T05:40:33.430Z] Total : 17548.72 68.55 0.00 0.00 7289.26 6732.33 18230.92 00:49:46.458 { 00:49:46.458 "results": [ 00:49:46.458 { 00:49:46.458 "job": "nvme0n1", 00:49:46.458 "core_mask": "0x2", 00:49:46.458 "workload": "randread", 00:49:46.458 "status": "finished", 00:49:46.458 "queue_depth": 128, 00:49:46.458 "io_size": 4096, 00:49:46.458 "runtime": 2.004705, 00:49:46.458 "iops": 17548.716644094766, 00:49:46.458 "mibps": 68.54967439099518, 00:49:46.458 "io_failed": 0, 00:49:46.458 "io_timeout": 0, 00:49:46.458 "avg_latency_us": 7289.258450565921, 00:49:46.458 "min_latency_us": 6732.334545454545, 00:49:46.458 "max_latency_us": 18230.923636363637 00:49:46.458 } 00:49:46.458 ], 00:49:46.458 "core_count": 1 00:49:46.458 } 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:49:46.458 | select(.opcode=="crc32c") 00:49:46.458 | "\(.module_name) \(.executed)"' 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79678 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79678 ']' 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79678 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79678 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:49:46.458 killing process with pid 79678 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:49:46.458 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79678' 00:49:46.458 Received shutdown signal, test time was about 2.000000 seconds 00:49:46.458 00:49:46.459 Latency(us) 00:49:46.459 [2024-12-09T05:40:33.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:46.459 [2024-12-09T05:40:33.431Z] =================================================================================================================== 00:49:46.459 [2024-12-09T05:40:33.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:46.459 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79678 00:49:46.459 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79678 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79725 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79725 /var/tmp/bperf.sock 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79725 ']' 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:46.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:46.718 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:49:46.718 I/O size of 131072 is greater than zero copy threshold (65536). 00:49:46.718 Zero copy mechanism will not be used. 00:49:46.718 [2024-12-09 05:40:33.583212] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:49:46.718 [2024-12-09 05:40:33.583311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79725 ] 00:49:46.977 [2024-12-09 05:40:33.736071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:46.977 [2024-12-09 05:40:33.766125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:46.977 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:46.977 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:49:46.977 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:49:46.977 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:49:46.977 05:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:49:47.236 [2024-12-09 05:40:34.113218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:47.236 05:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:47.236 05:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:47.805 nvme0n1 00:49:47.805 05:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:49:47.805 05:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:47.805 I/O size of 131072 is greater than zero copy threshold (65536). 00:49:47.805 Zero copy mechanism will not be used. 00:49:47.805 Running I/O for 2 seconds... 
00:49:49.675 8720.00 IOPS, 1090.00 MiB/s [2024-12-09T05:40:36.647Z] 8744.00 IOPS, 1093.00 MiB/s 00:49:49.675 Latency(us) 00:49:49.675 [2024-12-09T05:40:36.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:49.675 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:49:49.675 nvme0n1 : 2.00 8742.29 1092.79 0.00 0.00 1827.34 1645.85 7477.06 00:49:49.675 [2024-12-09T05:40:36.647Z] =================================================================================================================== 00:49:49.675 [2024-12-09T05:40:36.647Z] Total : 8742.29 1092.79 0.00 0.00 1827.34 1645.85 7477.06 00:49:49.675 { 00:49:49.675 "results": [ 00:49:49.675 { 00:49:49.675 "job": "nvme0n1", 00:49:49.675 "core_mask": "0x2", 00:49:49.675 "workload": "randread", 00:49:49.675 "status": "finished", 00:49:49.675 "queue_depth": 16, 00:49:49.675 "io_size": 131072, 00:49:49.675 "runtime": 2.002222, 00:49:49.675 "iops": 8742.287318788825, 00:49:49.675 "mibps": 1092.7859148486032, 00:49:49.675 "io_failed": 0, 00:49:49.675 "io_timeout": 0, 00:49:49.675 "avg_latency_us": 1827.344337709822, 00:49:49.675 "min_latency_us": 1645.8472727272726, 00:49:49.675 "max_latency_us": 7477.061818181818 00:49:49.675 } 00:49:49.675 ], 00:49:49.675 "core_count": 1 00:49:49.675 } 00:49:49.675 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:49:49.675 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:49:49.675 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:49:49.675 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:49:49.675 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:49:49.675 | select(.opcode=="crc32c") 00:49:49.675 | "\(.module_name) \(.executed)"' 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79725 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79725 ']' 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79725 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79725 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
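After each run the harness reads the accel framework statistics from the bdevperf process and checks that the CRC32C digests were actually computed, and by the expected module; with dsa_initiator false the expected module is 'software'. The check traced above (accel_get_stats piped through jq, then the two tests) amounts to:

  stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)

  # pick the crc32c opcode entry: "<module_name> <executed>"
  read -r acc_module acc_executed < <(jq -rc '.operations[]
      | select(.opcode=="crc32c")
      | "\(.module_name) \(.executed)"' <<< "$stats")

  exp_module=software                    # would be the DSA module if the offload path were enabled
  (( acc_executed > 0 ))                 # digests were actually computed
  [[ $acc_module == "$exp_module" ]]     # ...and by the expected engine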
00:49:49.935 killing process with pid 79725 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79725' 00:49:49.935 Received shutdown signal, test time was about 2.000000 seconds 00:49:49.935 00:49:49.935 Latency(us) 00:49:49.935 [2024-12-09T05:40:36.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:49.935 [2024-12-09T05:40:36.907Z] =================================================================================================================== 00:49:49.935 [2024-12-09T05:40:36.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79725 00:49:49.935 05:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79725 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79778 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79778 /var/tmp/bperf.sock 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79778 ']' 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:50.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:50.194 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:49:50.194 [2024-12-09 05:40:37.092088] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:49:50.195 [2024-12-09 05:40:37.092200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79778 ] 00:49:50.454 [2024-12-09 05:40:37.235915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:50.454 [2024-12-09 05:40:37.266641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:50.454 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:50.454 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:49:50.454 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:49:50.454 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:49:50.454 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:49:50.713 [2024-12-09 05:40:37.553685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:50.713 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:50.713 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:50.973 nvme0n1 00:49:50.973 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:49:50.973 05:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:51.233 Running I/O for 2 seconds... 
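The per-run JSON blocks above are internally consistent, which gives a quick way to sanity-check a result by hand (this arithmetic is not part of the harness): MiB/s is iops × io_size / 2^20, and with the queue kept full the average latency is roughly queue_depth / iops. For the two randread runs already reported:

  # randread, 4 KiB,  qd 128: 17548.72 * 4096   / 1048576 ≈ 68.55 MiB/s;   128 / 17548.72 ≈ 7.29 ms
  # randread, 128 KiB, qd 16:  8742.29 * 131072 / 1048576 ≈ 1092.79 MiB/s;  16 /  8742.29 ≈ 1.83 ms
  awk -v iops=8742.29 -v io=131072 -v qd=16 \
      'BEGIN { printf "%.2f MiB/s, %.0f us avg\n", iops*io/1048576, qd/iops*1e6 }'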
00:49:53.108 18797.00 IOPS, 73.43 MiB/s [2024-12-09T05:40:40.080Z] 18923.50 IOPS, 73.92 MiB/s 00:49:53.108 Latency(us) 00:49:53.108 [2024-12-09T05:40:40.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:53.108 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:49:53.108 nvme0n1 : 2.01 18894.99 73.81 0.00 0.00 6768.53 6315.29 16205.27 00:49:53.108 [2024-12-09T05:40:40.080Z] =================================================================================================================== 00:49:53.108 [2024-12-09T05:40:40.080Z] Total : 18894.99 73.81 0.00 0.00 6768.53 6315.29 16205.27 00:49:53.108 { 00:49:53.108 "results": [ 00:49:53.108 { 00:49:53.108 "job": "nvme0n1", 00:49:53.108 "core_mask": "0x2", 00:49:53.108 "workload": "randwrite", 00:49:53.108 "status": "finished", 00:49:53.108 "queue_depth": 128, 00:49:53.108 "io_size": 4096, 00:49:53.108 "runtime": 2.009792, 00:49:53.108 "iops": 18894.990128331687, 00:49:53.108 "mibps": 73.80855518879565, 00:49:53.108 "io_failed": 0, 00:49:53.108 "io_timeout": 0, 00:49:53.108 "avg_latency_us": 6768.534076150578, 00:49:53.108 "min_latency_us": 6315.2872727272725, 00:49:53.108 "max_latency_us": 16205.265454545455 00:49:53.108 } 00:49:53.108 ], 00:49:53.108 "core_count": 1 00:49:53.108 } 00:49:53.108 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:49:53.108 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:49:53.108 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:49:53.108 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:49:53.108 | select(.opcode=="crc32c") 00:49:53.108 | "\(.module_name) \(.executed)"' 00:49:53.108 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79778 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79778 ']' 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79778 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79778 00:49:53.677 killing process with pid 79778 00:49:53.677 Received shutdown signal, test time was about 2.000000 seconds 00:49:53.677 00:49:53.677 Latency(us) 00:49:53.677 [2024-12-09T05:40:40.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:49:53.677 [2024-12-09T05:40:40.649Z] =================================================================================================================== 00:49:53.677 [2024-12-09T05:40:40.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79778' 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79778 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79778 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79826 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79826 /var/tmp/bperf.sock 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79826 ']' 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:53.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:53.677 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:49:53.677 I/O size of 131072 is greater than zero copy threshold (65536). 00:49:53.677 Zero copy mechanism will not be used. 00:49:53.677 [2024-12-09 05:40:40.605239] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:49:53.677 [2024-12-09 05:40:40.605328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79826 ] 00:49:53.936 [2024-12-09 05:40:40.754695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:53.937 [2024-12-09 05:40:40.788694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:53.937 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:53.937 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:49:53.937 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:49:53.937 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:49:53.937 05:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:49:54.196 [2024-12-09 05:40:41.063808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:54.196 05:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:54.196 05:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:54.764 nvme0n1 00:49:54.764 05:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:49:54.764 05:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:54.764 I/O size of 131072 is greater than zero copy threshold (65536). 00:49:54.764 Zero copy mechanism will not be used. 00:49:54.764 Running I/O for 2 seconds... 
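run_bperf is invoked four times in this test, covering both digest-relevant block sizes for reads and writes, and each invocation repeats the same start/attach/perform_tests/teardown cycle shown earlier. A condensed equivalent of the traced parameter matrix (the loop is illustrative; digest.sh simply makes the four calls one after another):

  for cfg in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
      read -r rw bs qd <<< "$cfg"
      # maps straight onto the traced bdevperf flags: -w $rw -o $bs -q $qd -t 2 -m 2 -z --wait-for-rpc
      run_bperf "$rw" "$bs" "$qd" false   # final argument: scan_dsa=false in every run here
  done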
00:49:56.639 7423.00 IOPS, 927.88 MiB/s [2024-12-09T05:40:43.611Z] 7440.00 IOPS, 930.00 MiB/s 00:49:56.639 Latency(us) 00:49:56.639 [2024-12-09T05:40:43.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:56.639 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:49:56.639 nvme0n1 : 2.00 7437.14 929.64 0.00 0.00 2146.43 1690.53 4944.99 00:49:56.639 [2024-12-09T05:40:43.611Z] =================================================================================================================== 00:49:56.639 [2024-12-09T05:40:43.611Z] Total : 7437.14 929.64 0.00 0.00 2146.43 1690.53 4944.99 00:49:56.639 { 00:49:56.639 "results": [ 00:49:56.639 { 00:49:56.639 "job": "nvme0n1", 00:49:56.639 "core_mask": "0x2", 00:49:56.639 "workload": "randwrite", 00:49:56.639 "status": "finished", 00:49:56.639 "queue_depth": 16, 00:49:56.639 "io_size": 131072, 00:49:56.639 "runtime": 2.003459, 00:49:56.639 "iops": 7437.137470744347, 00:49:56.639 "mibps": 929.6421838430434, 00:49:56.639 "io_failed": 0, 00:49:56.639 "io_timeout": 0, 00:49:56.639 "avg_latency_us": 2146.427954606467, 00:49:56.639 "min_latency_us": 1690.530909090909, 00:49:56.639 "max_latency_us": 4944.989090909091 00:49:56.639 } 00:49:56.639 ], 00:49:56.639 "core_count": 1 00:49:56.639 } 00:49:56.639 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:49:56.639 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:49:56.639 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:49:56.639 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:49:56.639 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:49:56.639 | select(.opcode=="crc32c") 00:49:56.639 | "\(.module_name) \(.executed)"' 00:49:56.898 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:49:56.898 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:49:56.898 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:49:56.898 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:49:56.898 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79826 00:49:56.898 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79826 ']' 00:49:56.898 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79826 00:49:56.898 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:49:56.898 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:56.898 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79826 00:49:57.157 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:49:57.157 killing process with pid 79826 00:49:57.157 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:49:57.157 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79826' 00:49:57.157 Received shutdown signal, test time was about 2.000000 seconds 00:49:57.157 00:49:57.157 Latency(us) 00:49:57.157 [2024-12-09T05:40:44.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:57.157 [2024-12-09T05:40:44.129Z] =================================================================================================================== 00:49:57.157 [2024-12-09T05:40:44.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:57.157 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79826 00:49:57.157 05:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79826 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79653 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79653 ']' 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79653 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79653 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:57.157 killing process with pid 79653 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79653' 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79653 00:49:57.157 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79653 00:49:57.416 00:49:57.416 real 0m14.760s 00:49:57.416 user 0m28.865s 00:49:57.416 sys 0m4.113s 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:49:57.416 ************************************ 00:49:57.416 END TEST nvmf_digest_clean 00:49:57.416 ************************************ 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:49:57.416 ************************************ 00:49:57.416 START TEST nvmf_digest_error 00:49:57.416 ************************************ 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:49:57.416 
05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79906 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79906 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79906 ']' 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:57.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:57.416 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:57.416 [2024-12-09 05:40:44.340434] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:49:57.416 [2024-12-09 05:40:44.340509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:57.675 [2024-12-09 05:40:44.474964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:57.675 [2024-12-09 05:40:44.501794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:57.675 [2024-12-09 05:40:44.501856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:57.675 [2024-12-09 05:40:44.501865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:57.675 [2024-12-09 05:40:44.501872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:57.675 [2024-12-09 05:40:44.501878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
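nvmfappstart launches the target with --wait-for-rpc here so the accel layer can be reconfigured before any subsystem initializes; the next trace lines route the crc32c opcode to the error module. A rough sketch of that startup order, reusing the command line from the trace (the waitforlisten stand-in loop and the explicit framework_start_init call are illustrative, not copied from the log):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # wait for the RPC socket to answer
  "$rpc" accel_assign_opc -o crc32c -m error    # crc32c now goes through the error-injection module
  "$rpc" framework_start_init                   # let the target finish init (null0 bdev and the
                                                # 10.0.0.3:4420 TCP listener follow in the trace)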
00:49:57.675 [2024-12-09 05:40:44.502140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:57.675 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:57.675 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:49:57.675 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:57.675 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:57.675 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:57.676 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:57.676 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:49:57.676 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:57.676 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:57.676 [2024-12-09 05:40:44.618609] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:49:57.676 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:57.676 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:49:57.676 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:49:57.676 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:57.676 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:57.935 [2024-12-09 05:40:44.657015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:57.935 null0 00:49:57.935 [2024-12-09 05:40:44.690574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:57.935 [2024-12-09 05:40:44.714690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79926 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79926 /var/tmp/bperf.sock 00:49:57.935 05:40:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79926 ']' 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:57.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:57.935 05:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:57.935 [2024-12-09 05:40:44.780337] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:49:57.935 [2024-12-09 05:40:44.780427] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79926 ] 00:49:58.193 [2024-12-09 05:40:44.928414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:58.193 [2024-12-09 05:40:44.957087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:58.193 [2024-12-09 05:40:44.985265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:58.193 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:58.193 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:49:58.193 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:49:58.193 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:49:58.452 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:49:58.452 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:58.452 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:58.452 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:58.452 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:58.452 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:58.711 nvme0n1 00:49:58.711 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:49:58.711 05:40:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:58.711 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:58.711 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:58.711 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:49:58.711 05:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:58.970 Running I/O for 2 seconds... 00:49:58.970 [2024-12-09 05:40:45.791447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:58.970 [2024-12-09 05:40:45.791503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.970 [2024-12-09 05:40:45.791516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:58.970 [2024-12-09 05:40:45.806024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:58.970 [2024-12-09 05:40:45.806061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.970 [2024-12-09 05:40:45.806089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:58.970 [2024-12-09 05:40:45.820286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:58.970 [2024-12-09 05:40:45.820321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.970 [2024-12-09 05:40:45.820349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:58.970 [2024-12-09 05:40:45.834892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:58.970 [2024-12-09 05:40:45.834927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.970 [2024-12-09 05:40:45.834955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:58.970 [2024-12-09 05:40:45.849208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:58.970 [2024-12-09 05:40:45.849242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.970 [2024-12-09 05:40:45.849271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:58.970 [2024-12-09 05:40:45.863693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:58.970 [2024-12-09 05:40:45.863727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12689 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.970 [2024-12-09 05:40:45.863755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:58.970 [2024-12-09 05:40:45.879391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:58.970 [2024-12-09 05:40:45.879427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.970 [2024-12-09 05:40:45.879456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:58.970 [2024-12-09 05:40:45.897121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:58.970 [2024-12-09 05:40:45.897338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.970 [2024-12-09 05:40:45.897355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:58.970 [2024-12-09 05:40:45.913517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:58.970 [2024-12-09 05:40:45.913554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.970 [2024-12-09 05:40:45.913628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:58.970 [2024-12-09 05:40:45.928095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:58.970 [2024-12-09 05:40:45.928130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.970 [2024-12-09 05:40:45.928158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:45.943540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:45.943615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:45.943645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:45.957779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:45.957979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:45.957996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:45.972391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:45.972599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:6847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:45.972617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:45.986878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:45.986915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:45.986928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:46.001315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:46.001348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:46.001376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:46.015938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:46.015973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:46.016001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:46.030296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:46.030502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:46.030549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:46.044792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:46.045007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:46.045025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:46.059352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:46.059386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:46.059414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:46.073707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:46.073894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:46.073910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:46.088216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:46.088250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:46.088278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:46.102758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:46.102791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:46.102818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:46.116971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:46.117005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:46.117032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.229 [2024-12-09 05:40:46.131263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.229 [2024-12-09 05:40:46.131296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.229 [2024-12-09 05:40:46.131324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.230 [2024-12-09 05:40:46.145648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.230 [2024-12-09 05:40:46.145836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.230 [2024-12-09 05:40:46.145853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.230 [2024-12-09 05:40:46.160110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.230 [2024-12-09 05:40:46.160144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.230 [2024-12-09 05:40:46.160172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.230 [2024-12-09 05:40:46.174339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.230 
[2024-12-09 05:40:46.174376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.230 [2024-12-09 05:40:46.174388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.230 [2024-12-09 05:40:46.188629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.230 [2024-12-09 05:40:46.188663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.230 [2024-12-09 05:40:46.188692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.488 [2024-12-09 05:40:46.203975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.488 [2024-12-09 05:40:46.204009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.488 [2024-12-09 05:40:46.204036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.488 [2024-12-09 05:40:46.218352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.488 [2024-12-09 05:40:46.218554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.488 [2024-12-09 05:40:46.218581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.488 [2024-12-09 05:40:46.232778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.488 [2024-12-09 05:40:46.232958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.488 [2024-12-09 05:40:46.232990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.488 [2024-12-09 05:40:46.247266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.488 [2024-12-09 05:40:46.247301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.488 [2024-12-09 05:40:46.247328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.488 [2024-12-09 05:40:46.261551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.488 [2024-12-09 05:40:46.261591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.488 [2024-12-09 05:40:46.261620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.488 [2024-12-09 05:40:46.275779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xeb7fb0) 00:49:59.488 [2024-12-09 05:40:46.275813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.488 [2024-12-09 05:40:46.275825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.488 [2024-12-09 05:40:46.289951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.289983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.290011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.304332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.304365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.304393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.318652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.318685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.318712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.332835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.332871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.332882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.347072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.347105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.347132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.361229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.361263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.361291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.375542] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.375598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.375610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.389674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.389861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.389879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.404416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.404450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.404478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.418743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.418778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.418790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.432865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.432899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.432927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.489 [2024-12-09 05:40:46.447236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.489 [2024-12-09 05:40:46.447268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.489 [2024-12-09 05:40:46.447298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.748 [2024-12-09 05:40:46.462666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.748 [2024-12-09 05:40:46.462707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.748 [2024-12-09 05:40:46.462735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
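Every read in this stretch completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) because the receive-path data digest verification in bdevperf fails once crc32c corruption is injected. Issued by hand, the toggle seen earlier in the trace (rpc_cmd accel_error_inject_error) looks roughly like this, assuming rpc_cmd resolves to the target's default RPC socket and taking the -i 256 flag as the test uses it:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256   # start corrupting crc32c results
  # ... run I/O: reads fail with data digest errors / (00/22) completions as above ...
  "$rpc" accel_error_inject_error -o crc32c -t disable          # stop injecting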
00:49:59.748 [2024-12-09 05:40:46.476910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.748 [2024-12-09 05:40:46.476945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.748 [2024-12-09 05:40:46.476956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.748 [2024-12-09 05:40:46.491156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.748 [2024-12-09 05:40:46.491189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.748 [2024-12-09 05:40:46.491217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.748 [2024-12-09 05:40:46.505919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.748 [2024-12-09 05:40:46.506122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.748 [2024-12-09 05:40:46.506177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.748 [2024-12-09 05:40:46.521725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.748 [2024-12-09 05:40:46.521761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.748 [2024-12-09 05:40:46.521789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.748 [2024-12-09 05:40:46.538865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.748 [2024-12-09 05:40:46.539085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.748 [2024-12-09 05:40:46.539120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.748 [2024-12-09 05:40:46.555402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.748 [2024-12-09 05:40:46.555438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.555466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.749 [2024-12-09 05:40:46.570760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.749 [2024-12-09 05:40:46.570794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.570822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.749 [2024-12-09 05:40:46.585836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.749 [2024-12-09 05:40:46.585871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.585883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.749 [2024-12-09 05:40:46.601132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.749 [2024-12-09 05:40:46.601166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.601194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.749 [2024-12-09 05:40:46.616810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.749 [2024-12-09 05:40:46.616843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.616871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.749 [2024-12-09 05:40:46.632138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.749 [2024-12-09 05:40:46.632174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.632203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.749 [2024-12-09 05:40:46.647569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.749 [2024-12-09 05:40:46.647612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.647640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.749 [2024-12-09 05:40:46.663051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.749 [2024-12-09 05:40:46.663085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.663113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.749 [2024-12-09 05:40:46.678188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.749 [2024-12-09 05:40:46.678374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.678393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.749 [2024-12-09 05:40:46.692881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.749 [2024-12-09 05:40:46.692914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.692942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:59.749 [2024-12-09 05:40:46.707343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:49:59.749 [2024-12-09 05:40:46.707376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:59.749 [2024-12-09 05:40:46.707403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.729271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.729304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.729331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.743931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.743981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.744009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.758283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.758505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.758522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 17079.00 IOPS, 66.71 MiB/s [2024-12-09T05:40:46.981Z] [2024-12-09 05:40:46.774301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.774338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.774351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.788460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.788494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:2055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.788522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.802929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.802962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.802990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.817296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.817329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.817357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.831940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.832122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.832155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.846701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.846738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.846765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.861057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.861092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.861119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.875586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.875619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.875647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.889725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.889912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.889929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.907052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.907088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.907116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.923997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.924180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.924212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.939315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.939350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.939378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.953659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.953856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.953873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.009 [2024-12-09 05:40:46.968241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.009 [2024-12-09 05:40:46.968422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.009 [2024-12-09 05:40:46.968456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.269 [2024-12-09 05:40:46.983722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.269 [2024-12-09 05:40:46.983756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.269 [2024-12-09 05:40:46.983783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.269 [2024-12-09 05:40:46.998628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 
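A quick sanity check on output like the above is simply to count the failing completions; a throwaway shell one-liner, assuming the console output was captured to a file (bdevperf_console.log is a hypothetical name):

  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf_console.log   # failed completions
  grep -o 'lba:[0-9]*' bdevperf_console.log | sort -u | wc -l                # distinct LBAs hit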
00:50:00.269 [2024-12-09 05:40:46.998673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.269 [2024-12-09 05:40:46.998701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.269 [2024-12-09 05:40:47.012877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.269 [2024-12-09 05:40:47.012911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.269 [2024-12-09 05:40:47.012938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.269 [2024-12-09 05:40:47.027089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.269 [2024-12-09 05:40:47.027122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.269 [2024-12-09 05:40:47.027150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.269 [2024-12-09 05:40:47.041169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.269 [2024-12-09 05:40:47.041202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.269 [2024-12-09 05:40:47.041229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.269 [2024-12-09 05:40:47.055670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.269 [2024-12-09 05:40:47.055704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.269 [2024-12-09 05:40:47.055716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.269 [2024-12-09 05:40:47.069807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.269 [2024-12-09 05:40:47.070004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.269 [2024-12-09 05:40:47.070021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.269 [2024-12-09 05:40:47.084351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.269 [2024-12-09 05:40:47.084385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.269 [2024-12-09 05:40:47.084413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.269 [2024-12-09 05:40:47.098831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xeb7fb0) 00:50:00.269 [2024-12-09 05:40:47.098864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.270 [2024-12-09 05:40:47.098892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.270 [2024-12-09 05:40:47.113052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.270 [2024-12-09 05:40:47.113085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.270 [2024-12-09 05:40:47.113113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.270 [2024-12-09 05:40:47.127324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.270 [2024-12-09 05:40:47.127358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.270 [2024-12-09 05:40:47.127385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.270 [2024-12-09 05:40:47.141628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.270 [2024-12-09 05:40:47.141659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.270 [2024-12-09 05:40:47.141670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.270 [2024-12-09 05:40:47.155875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.270 [2024-12-09 05:40:47.155909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.270 [2024-12-09 05:40:47.155938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.270 [2024-12-09 05:40:47.170102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.270 [2024-12-09 05:40:47.170157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.270 [2024-12-09 05:40:47.170186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.270 [2024-12-09 05:40:47.184318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.270 [2024-12-09 05:40:47.184352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.270 [2024-12-09 05:40:47.184380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.270 [2024-12-09 05:40:47.198673] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.270 [2024-12-09 05:40:47.198708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.270 [2024-12-09 05:40:47.198720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.270 [2024-12-09 05:40:47.212777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.270 [2024-12-09 05:40:47.212810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.270 [2024-12-09 05:40:47.212837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.270 [2024-12-09 05:40:47.226991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.270 [2024-12-09 05:40:47.227024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.270 [2024-12-09 05:40:47.227051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.530 [2024-12-09 05:40:47.241829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.530 [2024-12-09 05:40:47.241863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.530 [2024-12-09 05:40:47.241891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.530 [2024-12-09 05:40:47.256439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.530 [2024-12-09 05:40:47.256472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.530 [2024-12-09 05:40:47.256499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.271063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.271097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.271125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.285309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.285343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.285370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
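Note on the repeated message pairs above: they are the expected outcome of the digest error test. The host computes a CRC32C data digest for each received PDU, the test corrupts that digest through accel error injection (the accel_error_inject_error -o crc32c -t corrupt call appears later in this log for the next run), and each affected READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). With --nvme-error-stat enabled, those completions are counted per bdev, and the script later reads the counter back. A minimal sketch of that query, reusing the rpc.py command and jq filter shown further down in this log (the socket path and bdev name are the ones used by this run, not general defaults):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # In this run the script observes 134 such completions, which satisfies its (( count > 0 )) check.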
00:50:00.531 [2024-12-09 05:40:47.299677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.299711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.299723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.313801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.313983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.314016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.328380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.328589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.328606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.342830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.343046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.343241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.357784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.358000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.358123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.372689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.372893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.373020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.387259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.387475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.387655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.402433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.402695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.402873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.417335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.417551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.417789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.432173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.432375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.432517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.446929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.447123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.447140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.461528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.461589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.461618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.475767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.475799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.475827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.531 [2024-12-09 05:40:47.490570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.531 [2024-12-09 05:40:47.490776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.531 [2024-12-09 05:40:47.490791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.506244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.506424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.506441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.520816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.521031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.521155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.535488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.535719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.535941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.550339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.550600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.550781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.565358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.565583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.565706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.580134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.580349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.580474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.594984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.595198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.595322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.609774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.610008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.610171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.624504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.624731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.624854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.639477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.639680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.639696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.654074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.654316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.654441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.676330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.676535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.676816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.694271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.694312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.694327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.710686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.710722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 
05:40:47.710735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.725910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.726110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.726167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.741452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.741649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.741666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:00.791 [2024-12-09 05:40:47.756996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:00.791 [2024-12-09 05:40:47.757226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:00.791 [2024-12-09 05:40:47.757243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:01.051 17078.50 IOPS, 66.71 MiB/s [2024-12-09T05:40:48.023Z] [2024-12-09 05:40:47.773315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xeb7fb0) 00:50:01.051 [2024-12-09 05:40:47.773351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:01.051 [2024-12-09 05:40:47.773380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:01.051 00:50:01.051 Latency(us) 00:50:01.051 [2024-12-09T05:40:48.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:01.051 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:50:01.051 nvme0n1 : 2.01 17073.27 66.69 0.00 0.00 7491.04 6791.91 29074.15 00:50:01.051 [2024-12-09T05:40:48.023Z] =================================================================================================================== 00:50:01.051 [2024-12-09T05:40:48.023Z] Total : 17073.27 66.69 0.00 0.00 7491.04 6791.91 29074.15 00:50:01.051 { 00:50:01.051 "results": [ 00:50:01.051 { 00:50:01.051 "job": "nvme0n1", 00:50:01.051 "core_mask": "0x2", 00:50:01.051 "workload": "randread", 00:50:01.051 "status": "finished", 00:50:01.051 "queue_depth": 128, 00:50:01.051 "io_size": 4096, 00:50:01.051 "runtime": 2.00811, 00:50:01.051 "iops": 17073.26789867089, 00:50:01.051 "mibps": 66.69245272918316, 00:50:01.051 "io_failed": 0, 00:50:01.051 "io_timeout": 0, 00:50:01.051 "avg_latency_us": 7491.035365054954, 00:50:01.051 "min_latency_us": 6791.912727272727, 00:50:01.051 "max_latency_us": 29074.15272727273 00:50:01.051 } 00:50:01.051 ], 00:50:01.051 "core_count": 1 00:50:01.051 } 00:50:01.051 05:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:50:01.051 05:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:50:01.051 05:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:50:01.051 05:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:50:01.051 | .driver_specific 00:50:01.051 | .nvme_error 00:50:01.051 | .status_code 00:50:01.051 | .command_transient_transport_error' 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 134 > 0 )) 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79926 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79926 ']' 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79926 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79926 00:50:01.310 killing process with pid 79926 00:50:01.310 Received shutdown signal, test time was about 2.000000 seconds 00:50:01.310 00:50:01.310 Latency(us) 00:50:01.310 [2024-12-09T05:40:48.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:01.310 [2024-12-09T05:40:48.282Z] =================================================================================================================== 00:50:01.310 [2024-12-09T05:40:48.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79926' 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79926 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79926 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79979 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79979 /var/tmp/bperf.sock 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79979 ']' 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:50:01.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:01.310 05:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:50:01.569 [2024-12-09 05:40:48.321330] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:50:01.569 [2024-12-09 05:40:48.321631] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79979 ] 00:50:01.569 I/O size of 131072 is greater than zero copy threshold (65536). 00:50:01.569 Zero copy mechanism will not be used. 00:50:01.569 [2024-12-09 05:40:48.466112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:01.569 [2024-12-09 05:40:48.495636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:01.569 [2024-12-09 05:40:48.524224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:50:02.506 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:02.506 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:50:02.506 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:50:02.506 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:50:02.765 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:50:02.765 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:02.765 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:50:02.765 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:02.765 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:50:02.765 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp 
-a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:50:03.024 nvme0n1 00:50:03.024 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:50:03.024 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:03.024 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:50:03.024 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:03.024 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:50:03.024 05:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:50:03.286 I/O size of 131072 is greater than zero copy threshold (65536). 00:50:03.286 Zero copy mechanism will not be used. 00:50:03.286 Running I/O for 2 seconds... 00:50:03.286 [2024-12-09 05:40:50.016818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.016869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.016885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.021972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.022026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.022056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.027107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.027146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.027176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.031935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.032202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.032221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.036995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.037033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.037062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.041502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.041539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.041568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.046024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.046060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.046088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.050598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.050642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.050654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.055158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.055193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.055221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.059317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.059524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.059679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.063897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.063935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.063964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.068185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.068221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 
[2024-12-09 05:40:50.068250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.072377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.072413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.072442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.076514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.076551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.076589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.286 [2024-12-09 05:40:50.080743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.286 [2024-12-09 05:40:50.080778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.286 [2024-12-09 05:40:50.080807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.084674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.084708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.084735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.088700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.088734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.088762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.092940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.092976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.093004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.096976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.097011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.097039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.101044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.101079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.101107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.105450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.105486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.105514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.109556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.109600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.109627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.113592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.113627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.113654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.117920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.117956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.117985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.122116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.122197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.122211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.126339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.126377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.126390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.130784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.130818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.130846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.134829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.134863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.134891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.138990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.139025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.139054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.143342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.143378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.143406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.147505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.147541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.147569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.151628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.151662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.151690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.155981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.156015] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.156043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.160057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.160091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.160119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.164159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.164195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.164223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.168489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.168524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.168552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.172676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.172711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.172739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.176753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.176787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.176815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.180788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.287 [2024-12-09 05:40:50.180824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.287 [2024-12-09 05:40:50.180836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.287 [2024-12-09 05:40:50.184655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 
00:50:03.287 [2024-12-09 05:40:50.184689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:50:03.287 [2024-12-09 05:40:50.184716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:50:03.287 [2024-12-09 05:40:50.188572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0)
00:50:03.287 [2024-12-09 05:40:50.188616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:50:03.287 [2024-12-09 05:40:50.188644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-record sequence -- nvme_tcp.c:1365 "data digest error on tqpair=(0x17709b0)", nvme_qpair.c:243 READ sqid:1 print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats continuously for qid:1 with cid cycling 0-15 and varying LBAs, from [2024-12-09 05:40:50.192455] through [2024-12-09 05:40:50.730827] ...]
00:50:03.815 [2024-12-09 05:40:50.734678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*:
data digest error on tqpair=(0x17709b0) 00:50:03.815 [2024-12-09 05:40:50.734712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.815 [2024-12-09 05:40:50.734740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.738558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.738767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.738784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.742656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.742689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.742716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.746533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.746740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.746756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.750818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.750853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.750880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.754741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.754774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.754803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.758669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.758701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.758729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.762497] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.762706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.762723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.766494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.766719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.766735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.770703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.770737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.770761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.774526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.774719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.774735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:03.816 [2024-12-09 05:40:50.778928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:03.816 [2024-12-09 05:40:50.778964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:03.816 [2024-12-09 05:40:50.778993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.783121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.783155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.783182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.787087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.787120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.787148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.791249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.791283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.791311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.795226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.795260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.795287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.799273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.799307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.799334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.803266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.803299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.803327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.807302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.807336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.807363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.811305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.811340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.811369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.815260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.815294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.815321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.819236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.819270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.819294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.823978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.824013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.824041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.827967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.828000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.828027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.832161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.832196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.832225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.836171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.836205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.836233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.840189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.840223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.840250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.844146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.844180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.844208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.848183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.848217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.848244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.852084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.852118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.852146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.856013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.856046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.856074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.859965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.859999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.860027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.863923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.863956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.863984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.867883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.867917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.867945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.871984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.872019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:50:04.078 [2024-12-09 05:40:50.872047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.078 [2024-12-09 05:40:50.875888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.078 [2024-12-09 05:40:50.875923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.078 [2024-12-09 05:40:50.875950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.879766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.879799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.879826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.883665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.883698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.883725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.887692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.887726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.887754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.891763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.891812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.891825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.895777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.895812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.895840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.899814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.899851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.899863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.903725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.903759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.903787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.907888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.907924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.907952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.912011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.912045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.912073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.916160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.916194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.916222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.920274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.920309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.920338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.924266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.924299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.924327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.928151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.928185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.928212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.932080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.932114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.932141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.935939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.935972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.936000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.939879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.939912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.939940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.943707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.943739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.943766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.947614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.947648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.947676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.951513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.951547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.951588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.955502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 
[2024-12-09 05:40:50.955535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.955563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.959386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.959419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.959446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.963339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.963374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.963403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.967328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.967361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.967388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.971269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.971303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.971330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.975249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.975283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.975311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.979245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.979279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.079 [2024-12-09 05:40:50.979307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.079 [2024-12-09 05:40:50.983393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x17709b0) 00:50:04.079 [2024-12-09 05:40:50.983431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:50.983459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:50.987661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:50.987695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:50.987724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:50.992080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:50.992116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:50.992144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:50.996494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:50.996530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:50.996558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:51.001563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:51.001653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:51.001685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:51.006416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:51.006692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:51.006711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.080 7579.00 IOPS, 947.38 MiB/s [2024-12-09T05:40:51.052Z] [2024-12-09 05:40:51.012727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:51.012766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:51.012780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.080 
[2024-12-09 05:40:51.016985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:51.017019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:51.017046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:51.021295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:51.021328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:51.021356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:51.025683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:51.025719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:51.025731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:51.029950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:51.030000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:51.030028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:51.033953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:51.034002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:51.034029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:51.037941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:51.037990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:51.038018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.080 [2024-12-09 05:40:51.041925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.080 [2024-12-09 05:40:51.041961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.080 [2024-12-09 05:40:51.041974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.046208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.046246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.046259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.050351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.050390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.050404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.054746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.054779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.054807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.058565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.058641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.058654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.062478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.062557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.062583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.066372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.066409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.066422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.070285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.070323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.070335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.074271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.074307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.074319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.078094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.078127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.078194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.081978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.082011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.082038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.341 [2024-12-09 05:40:51.085825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.341 [2024-12-09 05:40:51.085858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.341 [2024-12-09 05:40:51.085870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.089653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.089684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.089695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.093425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.093636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.093653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.097594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.097625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.097636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.101638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.101670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.101682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.105422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.105628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.105645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.109478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.109675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.109692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.113469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.113499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.113527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.117425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.117654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.117778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.121755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.121962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.122095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.125943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.126193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:50:04.342 [2024-12-09 05:40:51.126378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.130606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.130834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.131030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.135261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.135459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.135631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.139733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.139939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.140063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.144062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.144263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.144406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.148530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.148745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.148869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.152862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.153052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.153199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.157283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.157443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.157475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.161439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.161474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.161502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.165370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.165404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.165431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.169589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.169634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.169645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.173776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.173813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.173842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.178305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.178347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.178361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.182979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.183015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.183043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.187611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.187692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.342 [2024-12-09 05:40:51.187707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.342 [2024-12-09 05:40:51.192247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.342 [2024-12-09 05:40:51.192448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.192465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.197068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.197103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.197131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.201413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.201449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.201477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.205778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.205814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.205841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.210110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.210184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.210197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.214232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.214269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.214283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.218266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 
[2024-12-09 05:40:51.218302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.218315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.222296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.222333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.222345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.226526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.226586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.226599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.230565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.230641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.230656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.234663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.234697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.234724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.238654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.238688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.238716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.242836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.242870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.242898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.246798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.246832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.246860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.250800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.250834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.250862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.254971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.255039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.255067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.259052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.259088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.259116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.263141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.263176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.263204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.267234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.267269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.267297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.271525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.271585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.271598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.275456] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.275491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.275518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.279540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.279598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.279611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.283531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.283595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.283639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.287792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.287826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.287854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.291846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.291881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.291909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.295890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.295924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.295953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.343 [2024-12-09 05:40:51.299904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.299938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.343 [2024-12-09 05:40:51.299966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:50:04.343 [2024-12-09 05:40:51.304091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.343 [2024-12-09 05:40:51.304126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.344 [2024-12-09 05:40:51.304154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.344 [2024-12-09 05:40:51.308543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.344 [2024-12-09 05:40:51.308571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.344 [2024-12-09 05:40:51.308597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.604 [2024-12-09 05:40:51.312733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.604 [2024-12-09 05:40:51.312766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.604 [2024-12-09 05:40:51.312794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.604 [2024-12-09 05:40:51.317183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.604 [2024-12-09 05:40:51.317219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.604 [2024-12-09 05:40:51.317247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.604 [2024-12-09 05:40:51.321269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.604 [2024-12-09 05:40:51.321304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.604 [2024-12-09 05:40:51.321332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.604 [2024-12-09 05:40:51.325466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.604 [2024-12-09 05:40:51.325501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.604 [2024-12-09 05:40:51.325530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.604 [2024-12-09 05:40:51.329557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.604 [2024-12-09 05:40:51.329603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.604 [2024-12-09 05:40:51.329631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.604 [2024-12-09 05:40:51.333873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.604 [2024-12-09 05:40:51.333908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.333936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.337841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.338043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.338060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.342117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.342195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.342208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.346232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.346269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.346281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.350644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.350691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.350719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.354790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.354823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.354851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.358827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.358860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.358873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.362696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.362728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.362756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.366591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.366651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.366680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.370494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.370543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.370572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.374396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.374434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.374446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.378383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.378419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.378432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.382273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.382309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.382321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.386161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.386213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 
05:40:51.386226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.390218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.390255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.390268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.394159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.394211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.394224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.398157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.398208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.398220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.402283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.402320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.402332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.406412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.406449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.406475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.410391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.410428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.410440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.414474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.414541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.414569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.418423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.418474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.418486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.422327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.422362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.422375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.426251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.426288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.426301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.430247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.430283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.605 [2024-12-09 05:40:51.430295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.605 [2024-12-09 05:40:51.434107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.605 [2024-12-09 05:40:51.434179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.434192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.437954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.438002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.438029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.441878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.441913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.441925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.445734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.445769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.445780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.449532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.449746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.449762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.453530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.453729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.453801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.457719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.457917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.458040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.462185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.462372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.462581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.466664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.466870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.466994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.471132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.471349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.471475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.475598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.475815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.475940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.480091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.480302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.480428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.484437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.484669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.484855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.488922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.489127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.489251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.493246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.493436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.493585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.497549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.497730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.497763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.501635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 
[2024-12-09 05:40:51.501669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.501697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.505466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.505501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.505528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.509395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.509429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.509456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.513396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.513431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.513459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.517458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.517494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.517522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.521518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.521555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.521601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.526051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.526089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.526117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.530137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.530204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.530217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.534066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.534100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.606 [2024-12-09 05:40:51.534127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.606 [2024-12-09 05:40:51.537995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.606 [2024-12-09 05:40:51.538028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.607 [2024-12-09 05:40:51.538056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.607 [2024-12-09 05:40:51.541991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.607 [2024-12-09 05:40:51.542024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.607 [2024-12-09 05:40:51.542052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.607 [2024-12-09 05:40:51.545889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.607 [2024-12-09 05:40:51.545923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.607 [2024-12-09 05:40:51.545950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.607 [2024-12-09 05:40:51.549890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.607 [2024-12-09 05:40:51.549923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.607 [2024-12-09 05:40:51.549951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.607 [2024-12-09 05:40:51.553724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.607 [2024-12-09 05:40:51.553757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.607 [2024-12-09 05:40:51.553784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.607 [2024-12-09 05:40:51.557613] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.607 [2024-12-09 05:40:51.557647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.607 [2024-12-09 05:40:51.557675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.607 [2024-12-09 05:40:51.561447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.607 [2024-12-09 05:40:51.561480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.607 [2024-12-09 05:40:51.561508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.607 [2024-12-09 05:40:51.565456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.607 [2024-12-09 05:40:51.565492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.607 [2024-12-09 05:40:51.565520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.607 [2024-12-09 05:40:51.569466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.607 [2024-12-09 05:40:51.569501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.607 [2024-12-09 05:40:51.569544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.867 [2024-12-09 05:40:51.573625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.867 [2024-12-09 05:40:51.573701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.867 [2024-12-09 05:40:51.573714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.867 [2024-12-09 05:40:51.577525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.867 [2024-12-09 05:40:51.577587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.867 [2024-12-09 05:40:51.577617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.867 [2024-12-09 05:40:51.581742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.867 [2024-12-09 05:40:51.581774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.867 [2024-12-09 05:40:51.581802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:50:04.867 [2024-12-09 05:40:51.585653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.867 [2024-12-09 05:40:51.585686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.867 [2024-12-09 05:40:51.585713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.867 [2024-12-09 05:40:51.589443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.867 [2024-12-09 05:40:51.589477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.867 [2024-12-09 05:40:51.589504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.867 [2024-12-09 05:40:51.593459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.867 [2024-12-09 05:40:51.593493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.867 [2024-12-09 05:40:51.593521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.597460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.597494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.597520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.601449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.601482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.601510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.605425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.605459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.605487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.609372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.609406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.609433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.613291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.613325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.613353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.617222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.617255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.617283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.621184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.621217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.621245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.625147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.625181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.625210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.629085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.629119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.629146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.632946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.632995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.633023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.636874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.636908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.636921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.640736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.640771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.640782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.644510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.644724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.644740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.648449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.648480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.648508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.652543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.652789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.652927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.657052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.657248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.657389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.661471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.661689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.661812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.665780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.665986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:50:04.868 [2024-12-09 05:40:51.666118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.670110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.670324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.670458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.674429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.674689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.674813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.678758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.678962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.679087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.683157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.683346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.683485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.687624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.687833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.688008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.692058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.692260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.868 [2024-12-09 05:40:51.692408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.868 [2024-12-09 05:40:51.696410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.868 [2024-12-09 05:40:51.696445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.696473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.700445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.700480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.700507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.704414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.704448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.704476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.708413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.708448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.708475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.712403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.712437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.712466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.716468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.716502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.716529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.720467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.720502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.720526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.724497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.724531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.724559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.728439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.728474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.728502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.732508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.732542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.732569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.736507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.736541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.736569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.740454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.740488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.740515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.744418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.744452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.744479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.748405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.748439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.748467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.752393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 
[2024-12-09 05:40:51.752427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.752454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.756414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.756449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.756476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.760346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.760380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.760407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.764369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.764404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.764431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.768403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.768437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.768465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.772364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.772398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.772426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.776294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.776328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.776356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.780325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.780360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.780388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.784245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.784279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.784306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.788199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.788234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.788261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.792199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.792232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.792259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.796095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.796129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.869 [2024-12-09 05:40:51.796157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.869 [2024-12-09 05:40:51.800140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.869 [2024-12-09 05:40:51.800175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.870 [2024-12-09 05:40:51.800202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.870 [2024-12-09 05:40:51.804121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.870 [2024-12-09 05:40:51.804156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.870 [2024-12-09 05:40:51.804183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.870 [2024-12-09 05:40:51.808105] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.870 [2024-12-09 05:40:51.808138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.870 [2024-12-09 05:40:51.808167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.870 [2024-12-09 05:40:51.812007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.870 [2024-12-09 05:40:51.812041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.870 [2024-12-09 05:40:51.812069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.870 [2024-12-09 05:40:51.816017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.870 [2024-12-09 05:40:51.816051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.870 [2024-12-09 05:40:51.816079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:04.870 [2024-12-09 05:40:51.819928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.870 [2024-12-09 05:40:51.819961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.870 [2024-12-09 05:40:51.819989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:04.870 [2024-12-09 05:40:51.823929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.870 [2024-12-09 05:40:51.823962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.870 [2024-12-09 05:40:51.823992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:04.870 [2024-12-09 05:40:51.827875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.870 [2024-12-09 05:40:51.827922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.870 [2024-12-09 05:40:51.827935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:04.870 [2024-12-09 05:40:51.832003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:04.870 [2024-12-09 05:40:51.832040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:04.870 [2024-12-09 05:40:51.832068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:50:05.130 [2024-12-09 05:40:51.836252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.130 [2024-12-09 05:40:51.836287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.130 [2024-12-09 05:40:51.836330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.130 [2024-12-09 05:40:51.840341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.130 [2024-12-09 05:40:51.840374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.130 [2024-12-09 05:40:51.840402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.130 [2024-12-09 05:40:51.844506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.130 [2024-12-09 05:40:51.844540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.130 [2024-12-09 05:40:51.844568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.130 [2024-12-09 05:40:51.848458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.130 [2024-12-09 05:40:51.848493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.130 [2024-12-09 05:40:51.848521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.130 [2024-12-09 05:40:51.852517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.130 [2024-12-09 05:40:51.852552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.130 [2024-12-09 05:40:51.852590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.130 [2024-12-09 05:40:51.856393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.130 [2024-12-09 05:40:51.856426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.130 [2024-12-09 05:40:51.856454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.130 [2024-12-09 05:40:51.860366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.130 [2024-12-09 05:40:51.860401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.130 [2024-12-09 05:40:51.860429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.130 [2024-12-09 05:40:51.864360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.130 [2024-12-09 05:40:51.864395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.130 [2024-12-09 05:40:51.864422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.130 [2024-12-09 05:40:51.868407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.130 [2024-12-09 05:40:51.868442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.868470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.872540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.872600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.872612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.876507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.876542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.876570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.880430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.880463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.880491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.884417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.884450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.884478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.888482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.888517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.888545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.892646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.892679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.892707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.896630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.896663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.896691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.900520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.900553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.900609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.904445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.904478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.904506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.908451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.908486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.908513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.912373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.912406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.912430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.916393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.916427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:50:05.131 [2024-12-09 05:40:51.916455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.920319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.920353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.920381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.924297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.924331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.924359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.928344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.928378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.928405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.932262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.932298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.932325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.936251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.936286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.936313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.940168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.940201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.940228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.131 [2024-12-09 05:40:51.944158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.131 [2024-12-09 05:40:51.944192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.131 [2024-12-09 05:40:51.944219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.948140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.948192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.948221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.952250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.952283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.952311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.956215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.956249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.956277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.960189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.960222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.960250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.964186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.964220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.964248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.968098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.968132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.968159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.972108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.972142] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.972170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.976110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.976144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.976171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.980077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.980111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.980139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.983967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.984001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.984029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.987992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.988026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.988054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.991921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.991955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.991983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.995789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.995822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.995849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:51.999610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:51.999643] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:51.999671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:05.132 [2024-12-09 05:40:52.003556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:52.003776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:52.003793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:05.132 7579.50 IOPS, 947.44 MiB/s [2024-12-09T05:40:52.104Z] [2024-12-09 05:40:52.009545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17709b0) 00:50:05.132 [2024-12-09 05:40:52.009654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:05.132 [2024-12-09 05:40:52.009669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:05.132 00:50:05.132 Latency(us) 00:50:05.132 [2024-12-09T05:40:52.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:05.132 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:50:05.132 nvme0n1 : 2.00 7577.51 947.19 0.00 0.00 2108.20 1750.11 12273.11 00:50:05.132 [2024-12-09T05:40:52.104Z] =================================================================================================================== 00:50:05.132 [2024-12-09T05:40:52.104Z] Total : 7577.51 947.19 0.00 0.00 2108.20 1750.11 12273.11 00:50:05.132 { 00:50:05.132 "results": [ 00:50:05.132 { 00:50:05.132 "job": "nvme0n1", 00:50:05.132 "core_mask": "0x2", 00:50:05.132 "workload": "randread", 00:50:05.132 "status": "finished", 00:50:05.132 "queue_depth": 16, 00:50:05.132 "io_size": 131072, 00:50:05.132 "runtime": 2.002638, 00:50:05.133 "iops": 7577.505270548147, 00:50:05.133 "mibps": 947.1881588185183, 00:50:05.133 "io_failed": 0, 00:50:05.133 "io_timeout": 0, 00:50:05.133 "avg_latency_us": 2108.1999738804852, 00:50:05.133 "min_latency_us": 1750.1090909090908, 00:50:05.133 "max_latency_us": 12273.105454545455 00:50:05.133 } 00:50:05.133 ], 00:50:05.133 "core_count": 1 00:50:05.133 } 00:50:05.133 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:50:05.133 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:50:05.133 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:50:05.133 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:50:05.133 | .driver_specific 00:50:05.133 | .nvme_error 00:50:05.133 | .status_code 00:50:05.133 | .command_transient_transport_error' 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 490 > 0 )) 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 
79979 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79979 ']' 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79979 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79979 00:50:05.392 killing process with pid 79979 00:50:05.392 Received shutdown signal, test time was about 2.000000 seconds 00:50:05.392 00:50:05.392 Latency(us) 00:50:05.392 [2024-12-09T05:40:52.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:05.392 [2024-12-09T05:40:52.364Z] =================================================================================================================== 00:50:05.392 [2024-12-09T05:40:52.364Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79979' 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79979 00:50:05.392 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79979 00:50:05.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80039 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80039 /var/tmp/bperf.sock 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80039 ']' 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:50:05.652 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:05.653 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
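run_bperf_err first brings up a dedicated bdevperf process in wait-for-RPC mode and only starts the actual workload later through a perform_tests RPC. A minimal sketch of that launch, using only the arguments visible in this trace (binary path and socket are the ones logged for this run; -z keeps bdevperf idle until perform_tests is issued, and waitforlisten is the autotest helper that blocks until the RPC socket is listening):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock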
00:50:05.653 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:05.653 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:50:05.653 [2024-12-09 05:40:52.581051] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:50:05.653 [2024-12-09 05:40:52.581331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80039 ] 00:50:05.912 [2024-12-09 05:40:52.729265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:05.912 [2024-12-09 05:40:52.757982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:05.912 [2024-12-09 05:40:52.786315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:50:05.912 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:05.912 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:50:05.912 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:50:05.912 05:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:50:06.171 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:50:06.171 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:06.171 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:50:06.171 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:06.171 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:50:06.171 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:50:06.430 nvme0n1 00:50:06.430 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:50:06.430 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:06.430 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:50:06.699 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:06.699 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:50:06.699 05:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:50:06.699 Running I/O for 2 seconds... 
00:50:06.699 [2024-12-09 05:40:53.547927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efb048 00:50:06.699 [2024-12-09 05:40:53.549360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.699 [2024-12-09 05:40:53.549400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:06.699 [2024-12-09 05:40:53.563889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efb8b8 00:50:06.699 [2024-12-09 05:40:53.565675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.699 [2024-12-09 05:40:53.565909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:06.699 [2024-12-09 05:40:53.580108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efc128 00:50:06.699 [2024-12-09 05:40:53.581459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.699 [2024-12-09 05:40:53.581494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:50:06.699 [2024-12-09 05:40:53.595102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efc998 00:50:06.699 [2024-12-09 05:40:53.596408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.699 [2024-12-09 05:40:53.596453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:50:06.699 [2024-12-09 05:40:53.609593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efd208 00:50:06.699 [2024-12-09 05:40:53.610967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.700 [2024-12-09 05:40:53.611013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:50:06.700 [2024-12-09 05:40:53.624119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efda78 00:50:06.700 [2024-12-09 05:40:53.625394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.700 [2024-12-09 05:40:53.625438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:50:06.700 [2024-12-09 05:40:53.638643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efe2e8 00:50:06.700 [2024-12-09 05:40:53.639927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.700 [2024-12-09 05:40:53.639971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 
m:0 dnr:0 00:50:06.700 [2024-12-09 05:40:53.652850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efeb58 00:50:06.700 [2024-12-09 05:40:53.654270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.700 [2024-12-09 05:40:53.654302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:50:06.959 [2024-12-09 05:40:53.674047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efef90 00:50:06.959 [2024-12-09 05:40:53.676398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.959 [2024-12-09 05:40:53.676443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:50:06.959 [2024-12-09 05:40:53.688705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efeb58 00:50:06.959 [2024-12-09 05:40:53.691134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.959 [2024-12-09 05:40:53.691178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:50:06.959 [2024-12-09 05:40:53.703268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efe2e8 00:50:06.959 [2024-12-09 05:40:53.705534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.959 [2024-12-09 05:40:53.705586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:50:06.959 [2024-12-09 05:40:53.717658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efda78 00:50:06.959 [2024-12-09 05:40:53.719904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.959 [2024-12-09 05:40:53.719949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:50:06.959 [2024-12-09 05:40:53.732157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efd208 00:50:06.959 [2024-12-09 05:40:53.734439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.959 [2024-12-09 05:40:53.734510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:50:06.959 [2024-12-09 05:40:53.746327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efc998 00:50:06.959 [2024-12-09 05:40:53.748468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.959 [2024-12-09 05:40:53.748510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 
p:0 m:0 dnr:0 00:50:06.959 [2024-12-09 05:40:53.760017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efc128 00:50:06.959 [2024-12-09 05:40:53.762088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.959 [2024-12-09 05:40:53.762152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:50:06.959 [2024-12-09 05:40:53.773572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efb8b8 00:50:06.959 [2024-12-09 05:40:53.775780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.959 [2024-12-09 05:40:53.775809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:50:06.959 [2024-12-09 05:40:53.787118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efb048 00:50:06.960 [2024-12-09 05:40:53.789211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.789254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:50:06.960 [2024-12-09 05:40:53.800674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efa7d8 00:50:06.960 [2024-12-09 05:40:53.802851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.802894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:50:06.960 [2024-12-09 05:40:53.814292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef9f68 00:50:06.960 [2024-12-09 05:40:53.816334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.816378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:50:06.960 [2024-12-09 05:40:53.827794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef96f8 00:50:06.960 [2024-12-09 05:40:53.829832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.829861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:50:06.960 [2024-12-09 05:40:53.841392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef8e88 00:50:06.960 [2024-12-09 05:40:53.843513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.843556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:50:06.960 [2024-12-09 05:40:53.855549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef8618 00:50:06.960 [2024-12-09 05:40:53.857512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.857556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:50:06.960 [2024-12-09 05:40:53.869044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef7da8 00:50:06.960 [2024-12-09 05:40:53.871194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.871239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:50:06.960 [2024-12-09 05:40:53.882683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef7538 00:50:06.960 [2024-12-09 05:40:53.884612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.884657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:50:06.960 [2024-12-09 05:40:53.896044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef6cc8 00:50:06.960 [2024-12-09 05:40:53.897963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.898007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:50:06.960 [2024-12-09 05:40:53.909563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef6458 00:50:06.960 [2024-12-09 05:40:53.911515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.911560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:50:06.960 [2024-12-09 05:40:53.923670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef5be8 00:50:06.960 [2024-12-09 05:40:53.925847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:06.960 [2024-12-09 05:40:53.925879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:53.938322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef5378 00:50:07.219 [2024-12-09 05:40:53.940288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:53.940331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:53.952042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef4b08 00:50:07.219 [2024-12-09 05:40:53.953939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:53.953999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:53.965462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef4298 00:50:07.219 [2024-12-09 05:40:53.967413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:53.967456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:53.979224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef3a28 00:50:07.219 [2024-12-09 05:40:53.981107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:53.981148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:53.992767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef31b8 00:50:07.219 [2024-12-09 05:40:53.994692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:53.994721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.006298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef2948 00:50:07.219 [2024-12-09 05:40:54.008181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.008224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.020046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef20d8 00:50:07.219 [2024-12-09 05:40:54.021837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.021880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.033431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef1868 00:50:07.219 [2024-12-09 05:40:54.035319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.035360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.047039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef0ff8 00:50:07.219 [2024-12-09 05:40:54.048820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.048848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.062071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef0788 00:50:07.219 [2024-12-09 05:40:54.064220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.064264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.079070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eeff18 00:50:07.219 [2024-12-09 05:40:54.080980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.081023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.093723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eef6a8 00:50:07.219 [2024-12-09 05:40:54.095440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.095483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.107429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eeee38 00:50:07.219 [2024-12-09 05:40:54.109208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.109250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.121128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eee5c8 00:50:07.219 [2024-12-09 05:40:54.122978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.123022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.134770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eedd58 00:50:07.219 [2024-12-09 05:40:54.136423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.136466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.148402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eed4e8 00:50:07.219 [2024-12-09 05:40:54.150095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.150140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.161949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eecc78 00:50:07.219 [2024-12-09 05:40:54.163636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.219 [2024-12-09 05:40:54.163661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:50:07.219 [2024-12-09 05:40:54.175454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eec408 00:50:07.219 [2024-12-09 05:40:54.177125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.220 [2024-12-09 05:40:54.177168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:50:07.478 [2024-12-09 05:40:54.189608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eebb98 00:50:07.478 [2024-12-09 05:40:54.191439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.478 [2024-12-09 05:40:54.191484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:50:07.478 [2024-12-09 05:40:54.203687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eeb328 00:50:07.478 [2024-12-09 05:40:54.205267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.478 [2024-12-09 05:40:54.205311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:50:07.478 [2024-12-09 05:40:54.217281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eeaab8 00:50:07.479 [2024-12-09 05:40:54.218961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.219020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.230994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eea248 00:50:07.479 [2024-12-09 05:40:54.232547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.232598] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.244504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee99d8 00:50:07.479 [2024-12-09 05:40:54.246089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.246135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.258205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee9168 00:50:07.479 [2024-12-09 05:40:54.259752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.259780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.271730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee88f8 00:50:07.479 [2024-12-09 05:40:54.273226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.273270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.285296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee8088 00:50:07.479 [2024-12-09 05:40:54.286902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.286946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.299053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee7818 00:50:07.479 [2024-12-09 05:40:54.300520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.300564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.312554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee6fa8 00:50:07.479 [2024-12-09 05:40:54.314025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.314067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.326065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee6738 00:50:07.479 [2024-12-09 05:40:54.327549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.327599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.339922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee5ec8 00:50:07.479 [2024-12-09 05:40:54.341383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.341426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.353568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee5658 00:50:07.479 [2024-12-09 05:40:54.355061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.355105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.367127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee4de8 00:50:07.479 [2024-12-09 05:40:54.368668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.368698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.380790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee4578 00:50:07.479 [2024-12-09 05:40:54.382194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.382225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.394439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee3d08 00:50:07.479 [2024-12-09 05:40:54.395895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.395924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.408074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee3498 00:50:07.479 [2024-12-09 05:40:54.409437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.409481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.421544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee2c28 00:50:07.479 [2024-12-09 05:40:54.423024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 
05:40:54.423067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:50:07.479 [2024-12-09 05:40:54.435246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee23b8 00:50:07.479 [2024-12-09 05:40:54.436568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.479 [2024-12-09 05:40:54.436639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:50:07.738 [2024-12-09 05:40:54.449370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee1b48 00:50:07.738 [2024-12-09 05:40:54.450924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.738 [2024-12-09 05:40:54.450982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:07.738 [2024-12-09 05:40:54.463494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee12d8 00:50:07.738 [2024-12-09 05:40:54.464788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.738 [2024-12-09 05:40:54.464832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:50:07.738 [2024-12-09 05:40:54.477019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee0a68 00:50:07.739 [2024-12-09 05:40:54.478348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.478378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.490632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee01f8 00:50:07.739 [2024-12-09 05:40:54.491928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.491972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.504106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016edf988 00:50:07.739 [2024-12-09 05:40:54.505350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.505393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.517650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016edf118 00:50:07.739 [2024-12-09 05:40:54.518901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:50:07.739 [2024-12-09 05:40:54.518943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.531135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ede8a8 00:50:07.739 [2024-12-09 05:40:54.532366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.532409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:50:07.739 18091.00 IOPS, 70.67 MiB/s [2024-12-09T05:40:54.711Z] [2024-12-09 05:40:54.546257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ede038 00:50:07.739 [2024-12-09 05:40:54.547446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.547490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.565862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ede038 00:50:07.739 [2024-12-09 05:40:54.568134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.568177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.579670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ede8a8 00:50:07.739 [2024-12-09 05:40:54.581874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.581903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.593658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016edf118 00:50:07.739 [2024-12-09 05:40:54.595852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.595882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.607204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016edf988 00:50:07.739 [2024-12-09 05:40:54.609386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.609429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.620709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee01f8 00:50:07.739 [2024-12-09 05:40:54.622974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:116 nsid:1 lba:1932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.623017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.634204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee0a68 00:50:07.739 [2024-12-09 05:40:54.636340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.636382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.647774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee12d8 00:50:07.739 [2024-12-09 05:40:54.649819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.649862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.661373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee1b48 00:50:07.739 [2024-12-09 05:40:54.663522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.663563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.674963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee23b8 00:50:07.739 [2024-12-09 05:40:54.677046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.677088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.688514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee2c28 00:50:07.739 [2024-12-09 05:40:54.690678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.690720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:50:07.739 [2024-12-09 05:40:54.702023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee3498 00:50:07.739 [2024-12-09 05:40:54.704183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.739 [2024-12-09 05:40:54.704227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.716748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee3d08 00:50:07.998 [2024-12-09 05:40:54.718926] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.718987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.730545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee4578 00:50:07.998 [2024-12-09 05:40:54.732562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.732611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.745590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee4de8 00:50:07.998 [2024-12-09 05:40:54.748084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.748129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.761881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee5658 00:50:07.998 [2024-12-09 05:40:54.764103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.764147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.776878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee5ec8 00:50:07.998 [2024-12-09 05:40:54.778964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.779007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.791302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee6738 00:50:07.998 [2024-12-09 05:40:54.793328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.793372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.805671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee6fa8 00:50:07.998 [2024-12-09 05:40:54.807741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.807785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.819902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee7818 00:50:07.998 [2024-12-09 05:40:54.822003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.822047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.834672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee8088 00:50:07.998 [2024-12-09 05:40:54.836877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.836908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.849118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee88f8 00:50:07.998 [2024-12-09 05:40:54.851306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.851353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.864098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee9168 00:50:07.998 [2024-12-09 05:40:54.866016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.866060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.878766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ee99d8 00:50:07.998 [2024-12-09 05:40:54.880676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.880711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.893212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eea248 00:50:07.998 [2024-12-09 05:40:54.895217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.895260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.907960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eeaab8 00:50:07.998 [2024-12-09 05:40:54.909856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.909886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.922121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eeb328 00:50:07.998 [2024-12-09 05:40:54.923964] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.924007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.935858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eebb98 00:50:07.998 [2024-12-09 05:40:54.937623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.937656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.949427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eec408 00:50:07.998 [2024-12-09 05:40:54.951281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.951325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:50:07.998 [2024-12-09 05:40:54.963189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eecc78 00:50:07.998 [2024-12-09 05:40:54.965166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.998 [2024-12-09 05:40:54.965208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:50:08.255 [2024-12-09 05:40:54.977741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eed4e8 00:50:08.256 [2024-12-09 05:40:54.979495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:54.979538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:54.991390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eedd58 00:50:08.256 [2024-12-09 05:40:54.993143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:54.993185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.005108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eee5c8 00:50:08.256 [2024-12-09 05:40:55.006890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.006919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.018618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eeee38 00:50:08.256 [2024-12-09 
05:40:55.020263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.020306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.032189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eef6a8 00:50:08.256 [2024-12-09 05:40:55.033870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.033898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.045679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016eeff18 00:50:08.256 [2024-12-09 05:40:55.047347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.047389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.059359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef0788 00:50:08.256 [2024-12-09 05:40:55.061066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.061107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.073117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef0ff8 00:50:08.256 [2024-12-09 05:40:55.075039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.075082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.088887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef1868 00:50:08.256 [2024-12-09 05:40:55.090944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.091004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.105112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef20d8 00:50:08.256 [2024-12-09 05:40:55.107039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.107082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.119910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef2948 00:50:08.256 
[2024-12-09 05:40:55.121444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.121487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.133445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef31b8 00:50:08.256 [2024-12-09 05:40:55.135230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.135288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.147354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef3a28 00:50:08.256 [2024-12-09 05:40:55.148925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.148983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.160952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef4298 00:50:08.256 [2024-12-09 05:40:55.162615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.162664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.174673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef4b08 00:50:08.256 [2024-12-09 05:40:55.176166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.176210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.188366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef5378 00:50:08.256 [2024-12-09 05:40:55.189890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.189919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.202201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef5be8 00:50:08.256 [2024-12-09 05:40:55.203778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.203820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:50:08.256 [2024-12-09 05:40:55.216022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef6458 
00:50:08.256 [2024-12-09 05:40:55.217460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.256 [2024-12-09 05:40:55.217503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:50:08.514 [2024-12-09 05:40:55.230574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef6cc8 00:50:08.514 [2024-12-09 05:40:55.232017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.514 [2024-12-09 05:40:55.232060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:50:08.514 [2024-12-09 05:40:55.244341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef7538 00:50:08.514 [2024-12-09 05:40:55.245796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.514 [2024-12-09 05:40:55.245823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:50:08.514 [2024-12-09 05:40:55.258028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef7da8 00:50:08.514 [2024-12-09 05:40:55.259473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.514 [2024-12-09 05:40:55.259515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:50:08.514 [2024-12-09 05:40:55.271744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef8618 00:50:08.514 [2024-12-09 05:40:55.273123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.514 [2024-12-09 05:40:55.273167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:50:08.514 [2024-12-09 05:40:55.285381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef8e88 00:50:08.514 [2024-12-09 05:40:55.286848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.514 [2024-12-09 05:40:55.286877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:50:08.514 [2024-12-09 05:40:55.299004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016ef96f8 00:50:08.514 [2024-12-09 05:40:55.300358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.514 [2024-12-09 05:40:55.300401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:50:08.514 [2024-12-09 05:40:55.312707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with 
pdu=0x200016ef9f68 00:50:08.514 [2024-12-09 05:40:55.314076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.514 [2024-12-09 05:40:55.314119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:50:08.514 [2024-12-09 05:40:55.326717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efa7d8 00:50:08.514 [2024-12-09 05:40:55.328051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.514 [2024-12-09 05:40:55.328093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:50:08.514 [2024-12-09 05:40:55.340393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efb048 00:50:08.514 [2024-12-09 05:40:55.341724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.514 [2024-12-09 05:40:55.341751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:08.514 [2024-12-09 05:40:55.353927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efb8b8 00:50:08.514 [2024-12-09 05:40:55.355258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.514 [2024-12-09 05:40:55.355301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:08.515 [2024-12-09 05:40:55.367572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efc128 00:50:08.515 [2024-12-09 05:40:55.368873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.515 [2024-12-09 05:40:55.368902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:50:08.515 [2024-12-09 05:40:55.381122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efc998 00:50:08.515 [2024-12-09 05:40:55.382442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.515 [2024-12-09 05:40:55.382485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:50:08.515 [2024-12-09 05:40:55.394838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efd208 00:50:08.515 [2024-12-09 05:40:55.396079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.515 [2024-12-09 05:40:55.396122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:50:08.515 [2024-12-09 05:40:55.408415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x825ae0) with pdu=0x200016efda78 00:50:08.515 [2024-12-09 05:40:55.409661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.515 [2024-12-09 05:40:55.409722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:50:08.515 [2024-12-09 05:40:55.421920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efe2e8 00:50:08.515 [2024-12-09 05:40:55.423173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.515 [2024-12-09 05:40:55.423216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:50:08.515 [2024-12-09 05:40:55.435635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efeb58 00:50:08.515 [2024-12-09 05:40:55.436813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.515 [2024-12-09 05:40:55.436855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:50:08.515 [2024-12-09 05:40:55.454877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efef90 00:50:08.515 [2024-12-09 05:40:55.457039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.515 [2024-12-09 05:40:55.457083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:50:08.515 [2024-12-09 05:40:55.468541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efeb58 00:50:08.515 [2024-12-09 05:40:55.470853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.515 [2024-12-09 05:40:55.470882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:50:08.515 [2024-12-09 05:40:55.482644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efe2e8 00:50:08.773 [2024-12-09 05:40:55.485120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.773 [2024-12-09 05:40:55.485163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:50:08.773 [2024-12-09 05:40:55.496904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efda78 00:50:08.773 [2024-12-09 05:40:55.499123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.773 [2024-12-09 05:40:55.499169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:50:08.773 [2024-12-09 05:40:55.510850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x825ae0) with pdu=0x200016efd208 00:50:08.773 [2024-12-09 05:40:55.512946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.773 [2024-12-09 05:40:55.512988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:50:08.773 [2024-12-09 05:40:55.524355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x825ae0) with pdu=0x200016efc998 00:50:08.773 [2024-12-09 05:40:55.526580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:08.773 [2024-12-09 05:40:55.526614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:50:08.773 00:50:08.773 Latency(us) 00:50:08.773 [2024-12-09T05:40:55.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:08.773 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:50:08.773 nvme0n1 : 2.00 18076.30 70.61 0.00 0.00 7074.96 5510.98 28478.37 00:50:08.773 [2024-12-09T05:40:55.745Z] =================================================================================================================== 00:50:08.773 [2024-12-09T05:40:55.745Z] Total : 18076.30 70.61 0.00 0.00 7074.96 5510.98 28478.37 00:50:08.773 { 00:50:08.773 "results": [ 00:50:08.773 { 00:50:08.773 "job": "nvme0n1", 00:50:08.773 "core_mask": "0x2", 00:50:08.773 "workload": "randwrite", 00:50:08.773 "status": "finished", 00:50:08.773 "queue_depth": 128, 00:50:08.773 "io_size": 4096, 00:50:08.773 "runtime": 2.001682, 00:50:08.773 "iops": 18076.297833522007, 00:50:08.773 "mibps": 70.61053841219534, 00:50:08.773 "io_failed": 0, 00:50:08.773 "io_timeout": 0, 00:50:08.773 "avg_latency_us": 7074.956430066355, 00:50:08.773 "min_latency_us": 5510.981818181818, 00:50:08.773 "max_latency_us": 28478.37090909091 00:50:08.773 } 00:50:08.773 ], 00:50:08.773 "core_count": 1 00:50:08.773 } 00:50:08.773 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:50:08.773 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:50:08.773 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:50:08.773 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:50:08.773 | .driver_specific 00:50:08.773 | .nvme_error 00:50:08.773 | .status_code 00:50:08.773 | .command_transient_transport_error' 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80039 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80039 ']' 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80039 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80039 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:50:09.032 killing process with pid 80039 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80039' 00:50:09.032 Received shutdown signal, test time was about 2.000000 seconds 00:50:09.032 00:50:09.032 Latency(us) 00:50:09.032 [2024-12-09T05:40:56.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:09.032 [2024-12-09T05:40:56.004Z] =================================================================================================================== 00:50:09.032 [2024-12-09T05:40:56.004Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80039 00:50:09.032 05:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80039 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80086 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80086 /var/tmp/bperf.sock 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80086 ']' 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:09.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:09.290 05:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:50:09.290 [2024-12-09 05:40:56.090724] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:50:09.290 I/O size of 131072 is greater than zero copy threshold (65536). 
00:50:09.290 Zero copy mechanism will not be used. 00:50:09.290 [2024-12-09 05:40:56.090822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80086 ] 00:50:09.290 [2024-12-09 05:40:56.237594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:09.548 [2024-12-09 05:40:56.267409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:09.548 [2024-12-09 05:40:56.295989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:50:10.114 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:10.114 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:50:10.114 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:50:10.114 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:50:10.372 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:50:10.372 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:10.372 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:50:10.372 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:10.372 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:50:10.372 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:50:10.629 nvme0n1 00:50:10.630 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:50:10.630 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:10.630 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:50:10.630 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:10.630 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:50:10.630 05:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:50:10.888 I/O size of 131072 is greater than zero copy threshold (65536). 00:50:10.888 Zero copy mechanism will not be used. 00:50:10.888 Running I/O for 2 seconds... 
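The xtrace above captures the whole nvmf_digest_error flow for this run: bdevperf is started against its own RPC socket, the host controller is attached with TCP data digest (--ddgst) enabled and unlimited bdev retries, CRC32C corruption is injected into the accel layer, and the test then reads back how many completions ended in COMMAND TRANSIENT TRANSPORT ERROR. A minimal sketch of that sequence, using the commands verbatim from the trace, follows; the listen address 10.0.0.3:4420, the subsystem NQN, and the assumption that accel_error_inject_error goes to the target application's default RPC socket are specific to this job and may differ in other setups.

# Start bdevperf on a dedicated RPC socket: randwrite, 128 KiB I/O, queue depth 16,
# 2 s runtime, and -z to wait for the perform_tests RPC before driving I/O.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

# Keep per-controller NVMe error statistics and retry failed I/O indefinitely,
# so digest failures show up as counters rather than failed jobs.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
  bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the controller with TCP data digest enabled.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
  bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd CRC32C operation in the accel layer (sent to the default
# RPC socket here, assumed to belong to the nvmf target application).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive I/O for the configured 2 seconds, then count transient transport errors.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected corruption appears in the log as a data_crc32_calc_done digest error on the TCP connection, and the affected command completes with status (sct/sc) 00/22, Transient Transport Error, with dnr:0, so the host side is allowed to retry it; the test passes when the counter read above is greater than zero, as it was (141) for the 4 KiB, queue-depth-128 run that finished just before this one.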
00:50:10.888 [2024-12-09 05:40:57.689512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.689621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.689667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.694371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.694448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.694470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.699264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.699381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.699402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.703985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.704089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.704109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.708629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.708733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.708753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.713162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.713259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.713280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.717684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.717787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.717807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.722213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.722301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.722323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.726840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.726968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.726989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.731441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.731525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.731545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.736051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.736154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.736174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.740670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.740742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.740763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.745168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.745239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.745259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.749722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.749819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.749840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.754239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.754326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.754346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.758955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.759043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.888 [2024-12-09 05:40:57.759063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:10.888 [2024-12-09 05:40:57.763602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.888 [2024-12-09 05:40:57.763700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.763720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.768357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.768452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.768473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.773043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.773139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.773159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.777625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.777727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.777747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.782082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.782203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.782223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.786712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.786784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.786804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.791254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.791326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.791346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.795859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.795956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.795976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.800350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.800446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.800466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.804997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.805099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.805120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.809529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.809643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.809663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.814069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.814179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.814200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.818699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.818824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.818844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.823199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.823302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.823322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.827796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.827886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.827906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.832291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.832388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.832408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.836946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.837035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.837055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.841419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.841492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.841512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.845909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.845981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.846001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.850388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.850493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.850527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:10.889 [2024-12-09 05:40:57.855280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:10.889 [2024-12-09 05:40:57.855351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.889 [2024-12-09 05:40:57.855371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.148 [2024-12-09 05:40:57.860306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.148 [2024-12-09 05:40:57.860403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.148 [2024-12-09 05:40:57.860424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.148 [2024-12-09 05:40:57.865099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.148 [2024-12-09 05:40:57.865195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.148 [2024-12-09 05:40:57.865215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.148 [2024-12-09 05:40:57.869687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.148 [2024-12-09 05:40:57.869816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.148 [2024-12-09 05:40:57.869838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.148 [2024-12-09 05:40:57.874198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.148 [2024-12-09 05:40:57.874283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.148 [2024-12-09 05:40:57.874305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.148 [2024-12-09 05:40:57.878883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.148 [2024-12-09 05:40:57.878977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.148 [2024-12-09 
05:40:57.878997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.148 [2024-12-09 05:40:57.883431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.148 [2024-12-09 05:40:57.883527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.148 [2024-12-09 05:40:57.883547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.148 [2024-12-09 05:40:57.887978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.148 [2024-12-09 05:40:57.888073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.148 [2024-12-09 05:40:57.888093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.148 [2024-12-09 05:40:57.892601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.148 [2024-12-09 05:40:57.892673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.148 [2024-12-09 05:40:57.892693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.148 [2024-12-09 05:40:57.897198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.897269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.897289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.901723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.901819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.901840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.906313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.906404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.906439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.910936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.911031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:50:11.149 [2024-12-09 05:40:57.911051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.915468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.915539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.915559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.920124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.920195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.920215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.924649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.924734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.924754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.929122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.929193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.929213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.933677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.933780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.933800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.938056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.938189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.938209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.942734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.942834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.942854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.947207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.947301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.947320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.951752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.951846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.951867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.956292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.956388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.956408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.960932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.961029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.961049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.965546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.965655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.965679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.970044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.970116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.970176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.974725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.974821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.974841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.979265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.979338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.979358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.983898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.984001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.984021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.988407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.988494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.988514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.993032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.993127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.993146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:57.997503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:57.997607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:57.997627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:58.001939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:58.002039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:58.002059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:58.006548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:58.006633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:58.006665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:58.011144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:58.011299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:58.011320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.149 [2024-12-09 05:40:58.015885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.149 [2024-12-09 05:40:58.015980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.149 [2024-12-09 05:40:58.016000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.020403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.020497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.020517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.025029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.025157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.025177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.029567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.029697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.029716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.033984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.034055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.034075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.038583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.038683] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.038702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.043107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.043178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.043198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.047680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.047783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.047802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.052208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.052280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.052301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.056846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.056941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.056961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.061302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.061403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.061423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.065837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.065933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.065953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.070330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.070411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.070431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.074904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.074999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.075019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.079373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.079445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.079465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.084013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.084084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.084103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.088548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.088662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.088682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.093087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.093170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.093190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.097666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.097769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.097789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.102110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 
05:40:58.102238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.102258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.106896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.106992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.107012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.111378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.111469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.111489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.150 [2024-12-09 05:40:58.116385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.150 [2024-12-09 05:40:58.116478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.150 [2024-12-09 05:40:58.116511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.121593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.121714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.121736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.127580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.127732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.127756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.133104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.133210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.133232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.138838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 
00:50:11.411 [2024-12-09 05:40:58.138956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.138978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.144585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.144657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.144678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.149874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.150032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.150052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.155552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.155715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.155738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.160907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.161045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.161065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.166085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.166246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.166268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.171309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.171396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.171416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.176397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) 
with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.176492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.176512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.181471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.181565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.181620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.186667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.186752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.186773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.191564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.191664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.191684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.196263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.196356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.196377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.201044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.201156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.201176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.205628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.205699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.205718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.210094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.210215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.210235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.214763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.214857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.214877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.219216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.219311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.219331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.223892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.223995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.224015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.228370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.228465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.228485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.233087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.233183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.411 [2024-12-09 05:40:58.233203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.411 [2024-12-09 05:40:58.237650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.411 [2024-12-09 05:40:58.237745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.237765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.242119] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.242241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.242262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.246789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.246885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.246905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.251859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.251978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.251999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.257047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.257154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.257175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.262339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.262477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.262513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.267718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.267830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.267853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.273141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.273246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.273267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.278297] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.278387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.278410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.283292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.283396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.283416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.288303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.288392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.288412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.293437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.293538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.293558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.298206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.298291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.298313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.302944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.303033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.303053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.307768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.307860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.307881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.412 
[2024-12-09 05:40:58.312409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.312497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.312517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.317123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.317211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.317232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.322030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.322157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.322178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.326832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.326929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.326950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.331446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.331553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.331584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.336101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.336198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.336218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.340911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.340985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.341005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:50:11.412 [2024-12-09 05:40:58.345546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.345647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.345667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.350203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.350281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.350302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.355073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.355145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.355165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.359808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.359906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.359926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.364455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.364553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.412 [2024-12-09 05:40:58.364586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.412 [2024-12-09 05:40:58.369470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.412 [2024-12-09 05:40:58.369571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.413 [2024-12-09 05:40:58.369591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.413 [2024-12-09 05:40:58.374104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.413 [2024-12-09 05:40:58.374234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.413 [2024-12-09 05:40:58.374256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.413 [2024-12-09 05:40:58.379274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.413 [2024-12-09 05:40:58.379364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.413 [2024-12-09 05:40:58.379385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.384437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.384565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.384604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.389397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.389487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.389507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.394159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.394262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.394283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.398927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.399015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.399034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.403730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.403836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.403857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.408446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.408517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.408537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.413170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.413259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.413279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.418085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.418231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.418252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.422906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.422995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.423016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.427539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.427623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.427644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.432490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.432603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.432624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.437290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.437387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.437407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.442041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.442175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.442196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.446691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.446802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.446822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.451201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.451299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.451320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.455851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.455939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.455959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.460456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.673 [2024-12-09 05:40:58.460548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.673 [2024-12-09 05:40:58.460568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.673 [2024-12-09 05:40:58.465080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.465176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.465196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.469673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.469795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.469814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.474228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.474317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.474338] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.478839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.478934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.478954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.483322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.483394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.483413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.487943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.488053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.488074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.492467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.492560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.492607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.497112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.497207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.497227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.501656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.501713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.501733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.506072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.506191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.506211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.510765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.510838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.510858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.515315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.515408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.515427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.519911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.520006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.520026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.524433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.524504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.524524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.529028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.529122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.529141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.533646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.533741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.533760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.538038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.538158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 
05:40:58.538178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.542710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.542805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.542825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.547194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.547265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.547285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.551793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.551888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.551908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.556309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.556412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.556432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.560949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.561042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.561062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.565491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.565595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.565615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.569973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.570044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:50:11.674 [2024-12-09 05:40:58.570063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.574528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.574625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.574656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.579055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.579125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.579145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.583655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.674 [2024-12-09 05:40:58.583727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.674 [2024-12-09 05:40:58.583747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.674 [2024-12-09 05:40:58.588194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.588280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.588299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.592740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.592829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.592849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.597311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.597406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.597426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.601856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.601953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.601973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.606390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.606480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.606500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.611009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.611113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.611134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.615679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.615782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.615802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.620475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.620563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.620584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.625093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.625175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.625195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.629740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.629843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.629863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.634303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.634401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.634423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.675 [2024-12-09 05:40:58.639229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.675 [2024-12-09 05:40:58.639339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.675 [2024-12-09 05:40:58.639360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.937 [2024-12-09 05:40:58.644304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.937 [2024-12-09 05:40:58.644405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.937 [2024-12-09 05:40:58.644426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.937 [2024-12-09 05:40:58.649197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.937 [2024-12-09 05:40:58.649284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.937 [2024-12-09 05:40:58.649303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.937 [2024-12-09 05:40:58.653830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.937 [2024-12-09 05:40:58.653914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.937 [2024-12-09 05:40:58.653934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.937 [2024-12-09 05:40:58.658414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.937 [2024-12-09 05:40:58.658549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.937 [2024-12-09 05:40:58.658569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.663133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.663228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.663248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.667739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.667836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.667856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.672210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.672280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.672300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.676806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.676876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.676896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.681252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.681345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.681365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.938 6562.00 IOPS, 820.25 MiB/s [2024-12-09T05:40:58.910Z] [2024-12-09 05:40:58.686875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.686947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.686967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.691387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.691491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.691510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.696088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.696188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.696208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.700696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.700753] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.700773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.705274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.705345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.705365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.709825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.709903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.709922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.714324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.714383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.714403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.718896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.719000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.719020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.723382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.723475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.723495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.727990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.728060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.728080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.732570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 
[2024-12-09 05:40:58.732653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.732673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.737086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.737158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.737178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.741702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.741800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.741820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.746410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.746542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.746562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.751209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.751311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.751330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.755866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.755961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.755981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.760408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.760477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.760497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.764996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with 
pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.765098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.765119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.769542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.769651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.769672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.773991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.774086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.774106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.778648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.778752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.938 [2024-12-09 05:40:58.778771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.938 [2024-12-09 05:40:58.783123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.938 [2024-12-09 05:40:58.783193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.783213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.787724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.787821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.787841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.792258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.792362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.792382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.796997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.797100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.797121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.801613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.801730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.801750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.806082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.806204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.806224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.810743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.810837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.810856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.815266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.815345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.815365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.819845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.819954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.819974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.824361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.824442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.824462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.828950] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.829060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.829080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.833523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.833621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.833654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.838032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.838105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.838124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.842749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.842838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.842858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.847234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.847329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.847349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.851874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.851962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.851982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.856410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.856527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.856547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.861011] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.861091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.861111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.865568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.865665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.865685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.870062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.870159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.870181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.874744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.874815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.874836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.879212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.879283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.879303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.883731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.883834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.883855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.888197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.888299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.888318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:11.939 
[2024-12-09 05:40:58.892831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.892925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.892944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.897354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.897452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.897471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:11.939 [2024-12-09 05:40:58.902175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:11.939 [2024-12-09 05:40:58.902260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:11.939 [2024-12-09 05:40:58.902284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.222 [2024-12-09 05:40:58.907616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.222 [2024-12-09 05:40:58.907713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.222 [2024-12-09 05:40:58.907746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.222 [2024-12-09 05:40:58.913867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.222 [2024-12-09 05:40:58.913952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.222 [2024-12-09 05:40:58.913975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.222 [2024-12-09 05:40:58.919915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.222 [2024-12-09 05:40:58.920024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.222 [2024-12-09 05:40:58.920045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.222 [2024-12-09 05:40:58.925679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.222 [2024-12-09 05:40:58.925784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.222 [2024-12-09 05:40:58.925805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:50:12.222 [2024-12-09 05:40:58.930624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.222 [2024-12-09 05:40:58.930713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.930733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.935310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.935414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.935434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.940073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.940162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.940182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.944774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.944864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.944884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.949416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.949506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.949527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.954079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.954233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.954254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.958823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.958920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.958940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.963321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.963421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.963440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.967923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.967993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.968013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.972373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.972475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.972496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.976982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.977085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.977104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.981517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.981629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.981665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.986082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.986193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.986214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.990724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.990827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.990847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.995220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.995314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.995334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:58.999776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:58.999861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:58.999880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.004348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.004467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.004488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.009059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.009157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.009177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.014230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.014325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.014345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.018921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.019024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.019044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.023424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.023520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.023540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.028123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.028218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.028238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.032735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.032832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.032851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.037350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.037421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.037441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.041899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.041995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.042015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.046375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.046452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.046487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.050985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.223 [2024-12-09 05:40:59.051080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.223 [2024-12-09 05:40:59.051100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.223 [2024-12-09 05:40:59.055580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.055662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.055682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.060093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.060196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.060217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.064642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.064735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.064755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.069153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.069222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.069242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.073692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.073781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.073801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.078112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.078236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.078255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.082733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.082804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.082823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.087210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.087289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 
05:40:59.087308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.091799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.091894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.091914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.096302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.096398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.096417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.100902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.101006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.101025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.105390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.105495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.105514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.109980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.110082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.110103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.114694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.114763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.114783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.119208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.119305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:50:12.224 [2024-12-09 05:40:59.119325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.123821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.123917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.123937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.128331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.128428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.128448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.132956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.133051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.133070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.137516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.137609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.137629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.142074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.142214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.142234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.146739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.146848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.146867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.151226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.151319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.151339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.155782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.155877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.155897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.160643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.160753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.160775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.165671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.165747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.165769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.170865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.171042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.171064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.176449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.224 [2024-12-09 05:40:59.176542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.224 [2024-12-09 05:40:59.176595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.224 [2024-12-09 05:40:59.182262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.225 [2024-12-09 05:40:59.182351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.225 [2024-12-09 05:40:59.182375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.497 [2024-12-09 05:40:59.188721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.497 [2024-12-09 05:40:59.188824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.497 [2024-12-09 05:40:59.188847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.497 [2024-12-09 05:40:59.195120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.497 [2024-12-09 05:40:59.195219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.497 [2024-12-09 05:40:59.195241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.497 [2024-12-09 05:40:59.201622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.497 [2024-12-09 05:40:59.201766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.497 [2024-12-09 05:40:59.201789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.497 [2024-12-09 05:40:59.207136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.497 [2024-12-09 05:40:59.207264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.497 [2024-12-09 05:40:59.207284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.497 [2024-12-09 05:40:59.211989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.212076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.212097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.216751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.216841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.216862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.221341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.221430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.221451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.226040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.226153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.226190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.230978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.231091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.231111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.235771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.235863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.235883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.240444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.240540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.240560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.245149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.245245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.245264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.249817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.249919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.249939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.254365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.254443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.254494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.259058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.259162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.259181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.263699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.263822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.263841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.268187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.268284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.268304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.272870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.272973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.272993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.277427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.277531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.277551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.281957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.282060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.282080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.286653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.286756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.286776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.291212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.291308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.291327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.295771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.295860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.498 [2024-12-09 05:40:59.295880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.498 [2024-12-09 05:40:59.300290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.498 [2024-12-09 05:40:59.300385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.300405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.304877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.304980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.305001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.309438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.309535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.309554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.314113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.314243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.314264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.318765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.318855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.318876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.323306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 
05:40:59.323394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.323414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.327952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.328056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.328076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.332547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.332631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.332650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.337058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.337155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.337174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.341609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.341713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.341733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.346074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.346188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.346209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.350752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.350842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.350862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.355165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 
00:50:12.499 [2024-12-09 05:40:59.355268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.355288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.359832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.359936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.359956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.364330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.364401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.364420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.369042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.369137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.369157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.373657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.373741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.373760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.378365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.378452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.378487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.382972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.383083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.383102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.499 [2024-12-09 05:40:59.387617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with 
pdu=0x200016eff3c8 00:50:12.499 [2024-12-09 05:40:59.387715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.499 [2024-12-09 05:40:59.387735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.392269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.392366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.392385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.396861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.396953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.396974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.401398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.401496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.401515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.406071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.406188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.406209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.410837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.410933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.410953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.415398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.415468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.415488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.420136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.420241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.420260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.424748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.424846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.424866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.429249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.429346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.429365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.433940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.434076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.434097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.439121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.439210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.439230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.444088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.444178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.444198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.449220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.449341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.449362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.454592] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.454701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.454723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.500 [2024-12-09 05:40:59.460071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.500 [2024-12-09 05:40:59.460164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.500 [2024-12-09 05:40:59.460186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.794 [2024-12-09 05:40:59.466545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.794 [2024-12-09 05:40:59.466689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.794 [2024-12-09 05:40:59.466711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.794 [2024-12-09 05:40:59.472866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.794 [2024-12-09 05:40:59.472977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.794 [2024-12-09 05:40:59.472998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.794 [2024-12-09 05:40:59.478265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.794 [2024-12-09 05:40:59.478349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.794 [2024-12-09 05:40:59.478372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.794 [2024-12-09 05:40:59.483103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.794 [2024-12-09 05:40:59.483191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.794 [2024-12-09 05:40:59.483212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.794 [2024-12-09 05:40:59.487785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.794 [2024-12-09 05:40:59.487873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.794 [2024-12-09 05:40:59.487893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.794 [2024-12-09 05:40:59.492518] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.794 [2024-12-09 05:40:59.492635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.794 [2024-12-09 05:40:59.492656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.794 [2024-12-09 05:40:59.497364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.794 [2024-12-09 05:40:59.497471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.794 [2024-12-09 05:40:59.497491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.794 [2024-12-09 05:40:59.502191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.502286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.502307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.506986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.507084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.507104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.512017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.512107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.512127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.516763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.516875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.516896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.521503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.521620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.521640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.795 
[2024-12-09 05:40:59.526437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.526570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.526606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.531192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.531296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.531317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.535925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.536030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.536050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.540785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.540880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.540900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.545450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.545555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.545587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.550244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.550314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.550334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.554924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.555029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.555049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.559740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.559830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.559849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.564370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.564459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.564478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.568985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.569090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.569110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.573814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.573904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.573924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.578383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.578490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.578511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.583144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.583249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.583269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.587978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.588050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.588071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.592657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.592747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.592767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.597513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.597610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.597631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.602224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.602315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.602336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.607033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.607123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.607143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.612019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.612124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.612145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.616788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.616877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.616898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.621429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.621499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.621519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.795 [2024-12-09 05:40:59.626155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.795 [2024-12-09 05:40:59.626261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.795 [2024-12-09 05:40:59.626282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.630764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.630865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.630886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.635269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.635340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.635360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.639873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.639963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.639983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.644359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.644448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.644468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.648972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.649056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.649075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.653670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.653754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.653774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.658105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.658243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.658264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.662784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.662880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.662899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.667282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.667371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.667390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.671836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.671933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.671953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.676416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.676487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.676507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:50:12.796 [2024-12-09 05:40:59.680996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.681099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 05:40:59.681120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:50:12.796 6553.50 IOPS, 819.19 MiB/s [2024-12-09T05:40:59.768Z] [2024-12-09 05:40:59.686466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8125b0) with pdu=0x200016eff3c8 00:50:12.796 [2024-12-09 05:40:59.686612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.796 [2024-12-09 
05:40:59.686633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.796 00:50:12.796 Latency(us) 00:50:12.796 [2024-12-09T05:40:59.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:12.796 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:50:12.796 nvme0n1 : 2.00 6551.14 818.89 0.00 0.00 2437.08 1906.50 7804.74 00:50:12.796 [2024-12-09T05:40:59.768Z] =================================================================================================================== 00:50:12.796 [2024-12-09T05:40:59.768Z] Total : 6551.14 818.89 0.00 0.00 2437.08 1906.50 7804.74 00:50:12.796 { 00:50:12.796 "results": [ 00:50:12.796 { 00:50:12.796 "job": "nvme0n1", 00:50:12.796 "core_mask": "0x2", 00:50:12.796 "workload": "randwrite", 00:50:12.796 "status": "finished", 00:50:12.796 "queue_depth": 16, 00:50:12.796 "io_size": 131072, 00:50:12.796 "runtime": 2.003925, 00:50:12.796 "iops": 6551.1433811145625, 00:50:12.796 "mibps": 818.8929226393203, 00:50:12.796 "io_failed": 0, 00:50:12.796 "io_timeout": 0, 00:50:12.796 "avg_latency_us": 2437.083126696582, 00:50:12.796 "min_latency_us": 1906.5018181818182, 00:50:12.796 "max_latency_us": 7804.741818181818 00:50:12.796 } 00:50:12.796 ], 00:50:12.796 "core_count": 1 00:50:12.796 } 00:50:12.796 05:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:50:12.796 05:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:50:12.796 05:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:50:12.796 05:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:50:12.796 | .driver_specific 00:50:12.796 | .nvme_error 00:50:12.796 | .status_code 00:50:12.796 | .command_transient_transport_error' 00:50:13.062 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 424 > 0 )) 00:50:13.062 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80086 00:50:13.062 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80086 ']' 00:50:13.062 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80086 00:50:13.062 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:50:13.062 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:13.062 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80086 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:50:13.322 killing process with pid 80086 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80086' 00:50:13.322 Received shutdown signal, test time was about 2.000000 seconds 00:50:13.322 00:50:13.322 Latency(us) 00:50:13.322 
[2024-12-09T05:41:00.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:13.322 [2024-12-09T05:41:00.294Z] =================================================================================================================== 00:50:13.322 [2024-12-09T05:41:00.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80086 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80086 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79906 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79906 ']' 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79906 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79906 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:50:13.322 killing process with pid 79906 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79906' 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79906 00:50:13.322 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79906 00:50:13.581 00:50:13.581 real 0m16.143s 00:50:13.581 user 0m31.776s 00:50:13.581 sys 0m4.363s 00:50:13.581 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:13.581 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:50:13.581 ************************************ 00:50:13.581 END TEST nvmf_digest_error 00:50:13.581 ************************************ 00:50:13.581 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:50:13.581 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:50:13.581 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:50:13.581 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:50:13.581 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:50:13.581 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:50:13.581 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:50:13.581 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:50:13.581 rmmod nvme_tcp 00:50:13.581 rmmod nvme_fabrics 00:50:13.840 rmmod nvme_keyring 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 
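The trace above is the core assertion of nvmf_digest_error: every data digest (CRC32C) failure injected on the TCP qpair is surfaced to the host as a COMMAND TRANSIENT TRANSPORT ERROR completion, and host/digest.sh then reads the accumulated error count for the bdev over the bperf RPC socket before shutting the bperf process down. A minimal sketch of that check, reconstructed from the commands visible in the trace (the helper name, socket path, and jq filter are taken from the log; the surrounding script is simplified):

  # Ask the bperf app how many transient transport errors the bdev has seen.
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  (( errcount > 0 ))   # 424 errors were counted in this run, so the check passes

killprocess then confirms that the pid still belongs to an SPDK reactor (ps --no-headers -o comm=) before killing and waiting on it, which is why the log prints the reactor name next to each "killing process with pid" message.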
00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79906 ']' 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79906 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 79906 ']' 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 79906 00:50:13.840 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79906) - No such process 00:50:13.840 Process with pid 79906 is not found 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 79906 is not found' 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:13.840 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:13.840 
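The cleanup traced here is nvmftestfini: unload the NVMe transport modules, restore every iptables rule that is not tagged SPDK_NVMF, and tear down the veth/bridge/namespace topology the test set up. A condensed sketch of the same teardown, limited to the commands that actually appear in the trace (interface and namespace names are the ones printed above; the final namespace removal is an assumption, since _remove_spdk_ns is not expanded in this log):

  # Unload host-side NVMe modules pulled in for the TCP transport.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Keep only firewall rules that are not SPDK_NVMF-tagged.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Detach every veth end from the bridge, then bring them down.
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
  done
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" down
  done

  # Delete the bridge, the initiator-side interfaces, and the target-side
  # interfaces inside the nvmf_tgt_ns_spdk namespace.
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

  # Assumption: _remove_spdk_ns ultimately deletes the namespace itself.
  ip netns delete nvmf_tgt_ns_spdk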
05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:50:14.099 00:50:14.099 real 0m31.986s 00:50:14.099 user 1m0.939s 00:50:14.099 sys 0m8.917s 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:14.099 ************************************ 00:50:14.099 END TEST nvmf_digest 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:50:14.099 ************************************ 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:50:14.099 ************************************ 00:50:14.099 START TEST nvmf_host_multipath 00:50:14.099 ************************************ 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:50:14.099 * Looking for test storage... 00:50:14.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:50:14.099 05:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:50:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:14.099 --rc genhtml_branch_coverage=1 00:50:14.099 --rc genhtml_function_coverage=1 00:50:14.099 --rc genhtml_legend=1 00:50:14.099 --rc geninfo_all_blocks=1 00:50:14.099 --rc geninfo_unexecuted_blocks=1 00:50:14.099 00:50:14.099 ' 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:50:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:14.099 --rc genhtml_branch_coverage=1 00:50:14.099 --rc genhtml_function_coverage=1 00:50:14.099 --rc genhtml_legend=1 00:50:14.099 --rc geninfo_all_blocks=1 00:50:14.099 --rc geninfo_unexecuted_blocks=1 00:50:14.099 00:50:14.099 ' 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:50:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:14.099 --rc genhtml_branch_coverage=1 00:50:14.099 --rc genhtml_function_coverage=1 00:50:14.099 --rc genhtml_legend=1 00:50:14.099 --rc geninfo_all_blocks=1 00:50:14.099 --rc geninfo_unexecuted_blocks=1 00:50:14.099 00:50:14.099 ' 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:50:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:14.099 --rc genhtml_branch_coverage=1 00:50:14.099 --rc genhtml_function_coverage=1 00:50:14.099 --rc genhtml_legend=1 00:50:14.099 --rc geninfo_all_blocks=1 00:50:14.099 --rc geninfo_unexecuted_blocks=1 00:50:14.099 00:50:14.099 ' 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
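The stretch above is the per-test coverage preamble: scripts/common.sh takes the version printed by lcov --version, splits it on dots, and compares it field by field against 2 before choosing the old-style --rc lcov_branch_coverage / --rc lcov_function_coverage spellings for LCOV_OPTS. A minimal sketch of that gate as a self-contained bash snippet; the version_lt helper and its reliance on GNU sort -V are illustrative stand-ins, not the field-by-field comparator the script actually traces:

    # version_lt A B: succeed when dot-separated version A sorts strictly before B
    # (assumes GNU sort -V; the SPDK helper walks the version fields in a loop instead)
    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')    # 1.15 in this run
    if version_lt "$lcov_ver" 2; then
        # pre-2.0 lcov still spells the knobs lcov_branch_coverage / lcov_function_coverage
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
    export LCOV_OPTS="$lcov_rc_opt --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1"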
00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:14.099 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:50:14.358 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:50:14.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:50:14.359 Cannot find device "nvmf_init_br" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:50:14.359 Cannot find device "nvmf_init_br2" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:50:14.359 Cannot find device "nvmf_tgt_br" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:50:14.359 Cannot find device "nvmf_tgt_br2" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:50:14.359 Cannot find device "nvmf_init_br" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:50:14.359 Cannot find device "nvmf_init_br2" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:50:14.359 Cannot find device "nvmf_tgt_br" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:50:14.359 Cannot find device "nvmf_tgt_br2" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:50:14.359 Cannot find device "nvmf_br" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:50:14.359 Cannot find device "nvmf_init_if" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:50:14.359 Cannot find device "nvmf_init_if2" 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:50:14.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:14.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:50:14.359 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
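Everything from the nomaster/down/delete failures above through the addr and link-up commands is nvmf_veth_init: the leftover-interface teardown is expected to fail on a clean host (hence the Cannot find device / Cannot open network namespace noise, each guarded by true), after which the topology is rebuilt from scratch. A condensed sketch of that build step using the namespace, device names and 10.0.0.0/24 addresses from the trace; error output from the best-effort cleanup is suppressed here rather than echoed as above, and the bridge enslaving plus firewall rules that finish the topology follow just below in the trace:

    # best-effort cleanup: ignore failures, the devices usually do not exist yet
    for dev in nvmf_init_if nvmf_init_if2 nvmf_br; do
        ip link delete "$dev" 2>/dev/null || true
    done
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true

    # rebuild: one namespace for the target, a veth pair per interface
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator side keeps 10.0.0.1/.2, target side gets 10.0.0.3/.4 inside the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and create the bridge that the *_br peers join next
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up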
00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:50:14.618 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:50:14.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.388 ms 00:50:14.618 00:50:14.618 --- 10.0.0.3 ping statistics --- 00:50:14.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:14.618 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:50:14.618 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:50:14.618 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:50:14.618 00:50:14.618 --- 10.0.0.4 ping statistics --- 00:50:14.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:14.618 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:50:14.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:50:14.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:50:14.618 00:50:14.618 --- 10.0.0.1 ping statistics --- 00:50:14.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:14.618 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:50:14.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:50:14.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:50:14.618 00:50:14.618 --- 10.0.0.2 ping statistics --- 00:50:14.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:14.618 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80402 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80402 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80402 ']' 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:14.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:14.618 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:50:14.618 [2024-12-09 05:41:01.565524] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
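With the four ping checks above confirming that both initiator addresses and both namespaced target addresses are reachable across nvmf_br, nvmfappstart launches nvmf_tgt inside the namespace (NVMF_TARGET_NS_CMD is prepended to NVMF_APP, which produces the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3 invocation seen above) and waitforlisten blocks until the RPC socket answers. A simplified stand-in for that launch-and-wait step; polling spdk_get_version is an illustrative shortcut for the socket checks waitforlisten actually performs:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # run the target on cores 0-1 (-m 0x3) inside the test namespace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # wait until the app is up and its RPC socket accepts requests
    until "$rpc_py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
        sleep 0.5
    done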
00:50:14.618 [2024-12-09 05:41:01.565651] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:14.877 [2024-12-09 05:41:01.723097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:50:14.877 [2024-12-09 05:41:01.761698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:50:14.877 [2024-12-09 05:41:01.761756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:14.877 [2024-12-09 05:41:01.761770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:50:14.877 [2024-12-09 05:41:01.761780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:50:14.877 [2024-12-09 05:41:01.761788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:50:14.877 [2024-12-09 05:41:01.762730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:14.877 [2024-12-09 05:41:01.762744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:14.877 [2024-12-09 05:41:01.799662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:50:15.136 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:15.136 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:50:15.136 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:50:15.136 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:50:15.136 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:50:15.136 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:15.136 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80402 00:50:15.136 05:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:50:15.394 [2024-12-09 05:41:02.176097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:15.394 05:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:50:15.652 Malloc0 00:50:15.652 05:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:50:15.909 05:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:50:16.168 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:50:16.426 [2024-12-09 05:41:03.285938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:50:16.426 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:50:16.685 [2024-12-09 05:41:03.497923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:50:16.685 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80450 00:50:16.685 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:50:16.685 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:50:16.685 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80450 /var/tmp/bdevperf.sock 00:50:16.685 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80450 ']' 00:50:16.685 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:50:16.685 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:16.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:50:16.685 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:50:16.685 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:16.685 05:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:50:17.622 05:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:17.622 05:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:50:17.622 05:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:50:17.880 05:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:50:18.138 Nvme0n1 00:50:18.139 05:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:50:18.706 Nvme0n1 00:50:18.706 05:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:50:18.706 05:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:50:19.650 05:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:50:19.650 05:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:50:19.909 05:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:50:20.167 05:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:50:20.167 05:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80402 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:50:20.167 05:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80495 00:50:20.167 05:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:50:26.731 05:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:50:26.731 05:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:50:26.731 Attaching 4 probes... 00:50:26.731 @path[10.0.0.3, 4421]: 14721 00:50:26.731 @path[10.0.0.3, 4421]: 15244 00:50:26.731 @path[10.0.0.3, 4421]: 15104 00:50:26.731 @path[10.0.0.3, 4421]: 15160 00:50:26.731 @path[10.0.0.3, 4421]: 15360 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80495 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:50:26.731 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:50:26.990 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:50:26.990 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80613 00:50:26.990 05:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80402 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:50:26.990 05:41:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:50:33.563 05:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:50:33.563 05:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:50:33.563 Attaching 4 probes... 00:50:33.563 @path[10.0.0.3, 4420]: 20059 00:50:33.563 @path[10.0.0.3, 4420]: 20022 00:50:33.563 @path[10.0.0.3, 4420]: 20239 00:50:33.563 @path[10.0.0.3, 4420]: 20204 00:50:33.563 @path[10.0.0.3, 4420]: 20208 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80613 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:50:33.563 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:50:33.821 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:50:33.821 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80721 00:50:33.821 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80402 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:50:33.821 05:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:50:40.395 Attaching 4 probes... 00:50:40.395 @path[10.0.0.3, 4421]: 15094 00:50:40.395 @path[10.0.0.3, 4421]: 19657 00:50:40.395 @path[10.0.0.3, 4421]: 19435 00:50:40.395 @path[10.0.0.3, 4421]: 19399 00:50:40.395 @path[10.0.0.3, 4421]: 19408 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80721 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:50:40.395 05:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:50:40.395 05:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:50:40.653 05:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:50:40.653 05:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80839 00:50:40.653 05:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80402 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:50:40.653 05:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:50:47.210 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:50:47.210 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:50:47.210 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:50:47.210 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:50:47.210 Attaching 4 probes... 
00:50:47.210 00:50:47.210 00:50:47.210 00:50:47.210 00:50:47.210 00:50:47.210 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:50:47.210 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:50:47.211 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:50:47.211 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:50:47.211 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:50:47.211 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:50:47.211 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80839 00:50:47.211 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:50:47.211 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:50:47.211 05:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:50:47.211 05:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:50:47.469 05:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:50:47.469 05:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80957 00:50:47.469 05:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80402 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:50:47.469 05:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:50:54.031 Attaching 4 probes... 
00:50:54.031 @path[10.0.0.3, 4421]: 18943 00:50:54.031 @path[10.0.0.3, 4421]: 19206 00:50:54.031 @path[10.0.0.3, 4421]: 19131 00:50:54.031 @path[10.0.0.3, 4421]: 19123 00:50:54.031 @path[10.0.0.3, 4421]: 19310 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80957 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:50:54.031 05:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:50:55.026 05:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:50:55.026 05:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81075 00:50:55.026 05:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80402 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:50:55.026 05:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:51:01.592 05:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:51:01.592 05:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:51:01.592 Attaching 4 probes... 
00:51:01.592 @path[10.0.0.3, 4420]: 19109 00:51:01.592 @path[10.0.0.3, 4420]: 19412 00:51:01.592 @path[10.0.0.3, 4420]: 19358 00:51:01.592 @path[10.0.0.3, 4420]: 19362 00:51:01.592 @path[10.0.0.3, 4420]: 19312 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81075 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:51:01.592 [2024-12-09 05:41:48.449255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:51:01.592 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:51:01.850 05:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:51:08.413 05:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:51:08.413 05:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81255 00:51:08.413 05:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80402 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:51:08.413 05:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:51:15.000 05:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:51:15.000 05:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:51:15.000 Attaching 4 probes... 
00:51:15.000 @path[10.0.0.3, 4421]: 19013 00:51:15.000 @path[10.0.0.3, 4421]: 19425 00:51:15.000 @path[10.0.0.3, 4421]: 19324 00:51:15.000 @path[10.0.0.3, 4421]: 19348 00:51:15.000 @path[10.0.0.3, 4421]: 19122 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81255 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80450 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80450 ']' 00:51:15.000 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80450 00:51:15.001 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:51:15.001 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:15.001 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80450 00:51:15.001 killing process with pid 80450 00:51:15.001 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:51:15.001 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:51:15.001 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80450' 00:51:15.001 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80450 00:51:15.001 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80450 00:51:15.001 { 00:51:15.001 "results": [ 00:51:15.001 { 00:51:15.001 "job": "Nvme0n1", 00:51:15.001 "core_mask": "0x4", 00:51:15.001 "workload": "verify", 00:51:15.001 "status": "terminated", 00:51:15.001 "verify_range": { 00:51:15.001 "start": 0, 00:51:15.001 "length": 16384 00:51:15.001 }, 00:51:15.001 "queue_depth": 128, 00:51:15.001 "io_size": 4096, 00:51:15.001 "runtime": 55.559878, 00:51:15.001 "iops": 7975.773452922269, 00:51:15.001 "mibps": 31.155365050477613, 00:51:15.001 "io_failed": 0, 00:51:15.001 "io_timeout": 0, 00:51:15.001 "avg_latency_us": 16018.013401714197, 00:51:15.001 "min_latency_us": 428.2181818181818, 00:51:15.001 "max_latency_us": 7015926.69090909 00:51:15.001 } 00:51:15.001 ], 00:51:15.001 "core_count": 1 00:51:15.001 } 00:51:15.001 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80450 00:51:15.001 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:51:15.001 [2024-12-09 05:41:03.559688] Starting SPDK v25.01-pre git sha1 afe42438a / 
DPDK 24.03.0 initialization... 00:51:15.001 [2024-12-09 05:41:03.559780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80450 ] 00:51:15.001 [2024-12-09 05:41:03.710964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:15.001 [2024-12-09 05:41:03.749801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:51:15.001 [2024-12-09 05:41:03.782794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:51:15.001 Running I/O for 90 seconds... 00:51:15.001 7829.00 IOPS, 30.58 MiB/s [2024-12-09T05:42:01.973Z] 7647.50 IOPS, 29.87 MiB/s [2024-12-09T05:42:01.973Z] 7615.67 IOPS, 29.75 MiB/s [2024-12-09T05:42:01.973Z] 7631.50 IOPS, 29.81 MiB/s [2024-12-09T05:42:01.973Z] 7615.80 IOPS, 29.75 MiB/s [2024-12-09T05:42:01.973Z] 7605.17 IOPS, 29.71 MiB/s [2024-12-09T05:42:01.973Z] 7615.86 IOPS, 29.75 MiB/s [2024-12-09T05:42:01.973Z] 7592.75 IOPS, 29.66 MiB/s [2024-12-09T05:42:01.973Z] [2024-12-09 05:41:13.799779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.001 [2024-12-09 05:41:13.800203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.800345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.001 [2024-12-09 05:41:13.800447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.800552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.001 [2024-12-09 05:41:13.800678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.800767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.001 [2024-12-09 05:41:13.800881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.800996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.001 [2024-12-09 05:41:13.801081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.801181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.001 [2024-12-09 05:41:13.801291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.801389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.001 [2024-12-09 05:41:13.801492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.801597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.001 [2024-12-09 05:41:13.801677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.801802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.801884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.801974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.802070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.802207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.802293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.802380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.802495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.802606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.802716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.802805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.802883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.802983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.803070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.803152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.803240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.803321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.803398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.803483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.803560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.803671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.803747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.803835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.803900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.803979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.804055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.804138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.804235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.804324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.804401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.804487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.804579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.804681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.804764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.804842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.804917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:51:15.001 [2024-12-09 05:41:13.804987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.001 [2024-12-09 05:41:13.805064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.805229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.805394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.805568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.805620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.805655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.805697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.805731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.805778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.805812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 
05:41:13.805830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.805844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.805878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.805911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.805944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.805977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.805996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.806010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.806043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.806077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.806110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.806187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.806234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.806269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.002 [2024-12-09 05:41:13.806913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:51:15.002 [2024-12-09 05:41:13.806933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.002 [2024-12-09 05:41:13.806947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.806967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.806982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.003 [2024-12-09 05:41:13.807232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.003 [2024-12-09 05:41:13.807267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.003 [2024-12-09 05:41:13.807302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.003 [2024-12-09 05:41:13.807337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.003 [2024-12-09 05:41:13.807371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.003 [2024-12-09 05:41:13.807423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.003 [2024-12-09 05:41:13.807458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.003 [2024-12-09 05:41:13.807494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 
05:41:13.807748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.807970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.807989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.003 [2024-12-09 05:41:13.808358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:51:15.003 [2024-12-09 05:41:13.808378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.808392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.808412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:13.808431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.808451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:13.808466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.808486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:13.808501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.808540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:13.808556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.808587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:13.808605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.808626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:13.808641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.808663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:13.808678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:13.810080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:91 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:13.810867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:13.810883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:15.004 7809.33 IOPS, 30.51 MiB/s [2024-12-09T05:42:01.976Z] 8049.20 IOPS, 31.44 MiB/s [2024-12-09T05:42:01.976Z] 8227.18 IOPS, 32.14 MiB/s [2024-12-09T05:42:01.976Z] 8379.58 IOPS, 32.73 MiB/s [2024-12-09T05:42:01.976Z] 8511.62 IOPS, 33.25 MiB/s [2024-12-09T05:42:01.976Z] 8631.07 IOPS, 33.72 MiB/s [2024-12-09T05:42:01.976Z] [2024-12-09 05:41:20.346981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:20.347033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:20.347118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:20.347153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:20.347210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:20.347242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347261] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:20.347275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:20.347307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.004 [2024-12-09 05:41:20.347339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:20.347372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:20.347405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:20.347437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:20.347470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:15.004 [2024-12-09 05:41:20.347488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.004 [2024-12-09 05:41:20.347501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 
05:41:20.347627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.347916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.347973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.347994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.348253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.348287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.348321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.348355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.348389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.348422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.348455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.005 [2024-12-09 05:41:20.348496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.005 [2024-12-09 05:41:20.348733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:15.005 [2024-12-09 05:41:20.348753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.348768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.348788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.348802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.348822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.348836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.348856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.348872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.348891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.348906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.348926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.348947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.348983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.348997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:42 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349421] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.006 [2024-12-09 05:41:20.349436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.006 [2024-12-09 05:41:20.349470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.006 [2024-12-09 05:41:20.349504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.006 [2024-12-09 05:41:20.349538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.006 [2024-12-09 05:41:20.349571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.006 [2024-12-09 05:41:20.349635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.006 [2024-12-09 05:41:20.349672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.006 [2024-12-09 05:41:20.349706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 
sqhd:002b p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.349960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.349994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.350008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.350027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.350042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.350061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.350075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.350094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.350109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.350156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.350173] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:51:15.006 [2024-12-09 05:41:20.350193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.006 [2024-12-09 05:41:20.350208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.350360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.350397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.350432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.350497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.350531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 
[2024-12-09 05:41:20.350567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.350601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.350648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.350966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.350980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.351014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.351047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.351082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.351116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.351152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.351186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.351220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.351260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.351295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.351328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.351362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.351395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.351429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.351448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.351462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.352099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.007 [2024-12-09 05:41:20.352126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.352157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.352173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.352199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.352214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.352239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.352254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c 
p:0 m:0 dnr:0 00:51:15.007 [2024-12-09 05:41:20.352279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.007 [2024-12-09 05:41:20.352294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:20.352319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:20.352333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:20.352372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:20.352388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:20.352414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:20.352428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:20.352468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:20.352487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:15.008 8573.00 IOPS, 33.49 MiB/s [2024-12-09T05:42:01.980Z] 8157.25 IOPS, 31.86 MiB/s [2024-12-09T05:42:01.980Z] 8251.06 IOPS, 32.23 MiB/s [2024-12-09T05:42:01.980Z] 8333.11 IOPS, 32.55 MiB/s [2024-12-09T05:42:01.980Z] 8406.11 IOPS, 32.84 MiB/s [2024-12-09T05:42:01.980Z] 8474.20 IOPS, 33.10 MiB/s [2024-12-09T05:42:01.980Z] 8534.29 IOPS, 33.34 MiB/s [2024-12-09T05:42:01.980Z] [2024-12-09 05:41:27.480251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.480309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.480391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.480426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.480458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.480490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.480522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.480554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.480601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480816] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.480977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.480996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.481009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.481049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.481083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.481116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.481149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.481201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.481235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.481268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.481300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.481332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.481364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.481395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.008 [2024-12-09 05:41:27.481428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.008 [2024-12-09 05:41:27.481460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:51:15.008 [2024-12-09 05:41:27.481488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:58 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.481503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.481536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.481597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.481632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.481666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.481699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.481733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.481766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.481799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.481832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.481865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.481899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.481941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.481978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:51:15.009 [2024-12-09 05:41:27.482285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.009 [2024-12-09 05:41:27.482695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.482729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.482764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.482798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:15.009 [2024-12-09 05:41:27.482817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.009 [2024-12-09 05:41:27.482831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.482850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.482865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.482883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.482905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.482925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.482941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.482960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.482975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:51:15.010 [2024-12-09 05:41:27.483375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.483591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.483625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.483659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.483692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.483742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.483780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.483823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.010 [2024-12-09 05:41:27.483861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.483983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.483998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.484017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.484032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.484052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.484066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.484085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.484100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.484120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.484135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.484173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.484188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.484208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.484222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:51:15.010 [2024-12-09 05:41:27.484241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.010 [2024-12-09 05:41:27.484261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.484296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.484329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.484365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.484399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.484435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
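The "(03/02)" and "(00/08)" pairs in these completions follow the (SCT/SC) convention used by spdk_nvme_print_completion: status code type 3h (Path Related Status) with status code 02h is Asymmetric Access Inaccessible, i.e. the namespace is unreachable on this path in its current ANA state, while type 0h (Generic Command Status) with code 08h is Command Aborted due to SQ Deletion, which shows up further down once the queue pair is torn down. A minimal, self-contained sketch of that decoding, covering only the two codes seen in this log (it is not SPDK's own nvme_qpair.c helper):

    /*
     * Illustrative sketch only (not SPDK's nvme_qpair.c print helpers):
     * map the "(SCT/SC)" status pairs printed in the completions above
     * to the status strings seen in this log.
     */
    #include <stdio.h>

    static const char *nvme_status_str(unsigned int sct, unsigned int sc)
    {
        if (sct == 0x3 && sc == 0x02) {
            /* Path Related Status / Asymmetric Access Inaccessible */
            return "ASYMMETRIC ACCESS INACCESSIBLE";
        }
        if (sct == 0x0 && sc == 0x08) {
            /* Generic Command Status / Command Aborted due to SQ Deletion */
            return "ABORTED - SQ DELETION";
        }
        return "UNKNOWN (extend the table for other codes)";
    }

    int main(void)
    {
        printf("(03/02) -> %s\n", nvme_status_str(0x3, 0x02));
        printf("(00/08) -> %s\n", nvme_status_str(0x0, 0x08));
        return 0;
    }

Built with any C compiler, this prints the two status strings that recur throughout the completions in this section.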
00:51:15.011 [2024-12-09 05:41:27.484454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:27.484468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:27.484501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:27.484534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:27.484567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:27.484612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:27.484646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.484666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:27.484687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.485314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:27.485340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.485371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.485387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.485412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.485427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.485452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.485466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.485492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.485506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.485532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.485546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.485606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.485625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.485652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.485667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:27.485709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:27.485728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:15.011 8544.91 IOPS, 33.38 MiB/s [2024-12-09T05:42:01.983Z] 8173.39 IOPS, 31.93 MiB/s [2024-12-09T05:42:01.983Z] 7832.83 IOPS, 30.60 MiB/s [2024-12-09T05:42:01.983Z] 7519.52 IOPS, 29.37 MiB/s [2024-12-09T05:42:01.983Z] 7230.31 IOPS, 28.24 MiB/s [2024-12-09T05:42:01.983Z] 6962.52 IOPS, 27.20 MiB/s [2024-12-09T05:42:01.983Z] 6713.86 IOPS, 26.23 MiB/s [2024-12-09T05:42:01.983Z] 6499.97 IOPS, 25.39 MiB/s [2024-12-09T05:42:01.983Z] 6597.97 IOPS, 25.77 MiB/s [2024-12-09T05:42:01.983Z] 6694.81 IOPS, 26.15 MiB/s [2024-12-09T05:42:01.983Z] 6785.59 IOPS, 26.51 MiB/s [2024-12-09T05:42:01.983Z] 6872.82 IOPS, 26.85 MiB/s [2024-12-09T05:42:01.983Z] 6953.03 IOPS, 27.16 MiB/s [2024-12-09T05:42:01.983Z] 7024.77 IOPS, 27.44 MiB/s [2024-12-09T05:42:01.983Z] [2024-12-09 05:41:40.877935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:40.877990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:40.878103] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:40.878185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:40.878221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:40.878255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:40.878290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:40.878325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.011 [2024-12-09 05:41:40.878360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:40.878395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:40.878430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:40.878493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:40.878540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:40.878572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:40.878612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.011 [2024-12-09 05:41:40.878660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:51:15.011 [2024-12-09 05:41:40.878680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.878694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.878713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.878727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.878748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.878763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.878782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.878796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.878815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.878829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.878848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.878861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.878880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.878894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.878913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.878927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.878946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.878960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.879238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.879264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.879290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.879315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.879340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.879366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.879396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.012 [2024-12-09 05:41:40.879422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 
05:41:40.879771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.012 [2024-12-09 05:41:40.879823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.012 [2024-12-09 05:41:40.879836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.879849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.879863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.879875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.879889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.879901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.879915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.879927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.879941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.879953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.879981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.879993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.013 [2024-12-09 05:41:40.880754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 
05:41:40.880878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.013 [2024-12-09 05:41:40.880916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.013 [2024-12-09 05:41:40.880929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.014 [2024-12-09 05:41:40.880941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.880955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.014 [2024-12-09 05:41:40.880967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.880981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.880993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:15.014 [2024-12-09 05:41:40.881396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.014 [2024-12-09 05:41:40.881422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.014 [2024-12-09 05:41:40.881451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.014 [2024-12-09 05:41:40.881483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.014 [2024-12-09 05:41:40.881509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.014 [2024-12-09 05:41:40.881535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.014 [2024-12-09 05:41:40.881571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.014 [2024-12-09 05:41:40.881599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ec310 is same with the state(6) to be set 00:51:15.014 [2024-12-09 05:41:40.881628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:15.014 [2024-12-09 05:41:40.881638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:15.014 [2024-12-09 05:41:40.881647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25680 len:8 PRP1 0x0 PRP2 0x0 00:51:15.014 [2024-12-09 05:41:40.881659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:15.014 [2024-12-09 05:41:40.881681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:15.014 [2024-12-09 05:41:40.881690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26264 len:8 PRP1 0x0 PRP2 0x0 00:51:15.014 [2024-12-09 
05:41:40.881702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:15.014 [2024-12-09 05:41:40.881723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:15.014 [2024-12-09 05:41:40.881732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26272 len:8 PRP1 0x0 PRP2 0x0 00:51:15.014 [2024-12-09 05:41:40.881744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:15.014 [2024-12-09 05:41:40.881765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:15.014 [2024-12-09 05:41:40.881774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26280 len:8 PRP1 0x0 PRP2 0x0 00:51:15.014 [2024-12-09 05:41:40.881789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:15.014 [2024-12-09 05:41:40.881810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:15.014 [2024-12-09 05:41:40.881825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26288 len:8 PRP1 0x0 PRP2 0x0 00:51:15.014 [2024-12-09 05:41:40.881838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:15.014 [2024-12-09 05:41:40.881862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:15.014 [2024-12-09 05:41:40.881871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26296 len:8 PRP1 0x0 PRP2 0x0 00:51:15.014 [2024-12-09 05:41:40.881883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.014 [2024-12-09 05:41:40.881895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:15.015 [2024-12-09 05:41:40.881903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:15.015 [2024-12-09 05:41:40.881912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26304 len:8 PRP1 0x0 PRP2 0x0 00:51:15.015 [2024-12-09 05:41:40.881924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.015 [2024-12-09 05:41:40.881936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:15.015 [2024-12-09 05:41:40.881945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:15.015 [2024-12-09 05:41:40.881954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26312 len:8 PRP1 0x0 PRP2 0x0 00:51:15.015 [2024-12-09 05:41:40.881966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.015 [2024-12-09 05:41:40.881978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:15.015 [2024-12-09 05:41:40.881987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:15.015 [2024-12-09 05:41:40.881996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26320 len:8 PRP1 0x0 PRP2 0x0 00:51:15.015 [2024-12-09 05:41:40.882008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.015 [2024-12-09 05:41:40.882177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:15.015 [2024-12-09 05:41:40.882204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.015 [2024-12-09 05:41:40.882219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:15.015 [2024-12-09 05:41:40.882232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.015 [2024-12-09 05:41:40.882245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:15.015 [2024-12-09 05:41:40.882258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.015 [2024-12-09 05:41:40.882271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:15.015 [2024-12-09 05:41:40.882284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.015 [2024-12-09 05:41:40.882298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:15.015 [2024-12-09 05:41:40.882311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:15.015 [2024-12-09 05:41:40.882339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225d1e0 is same with the state(6) to be set 00:51:15.015 [2024-12-09 05:41:40.883342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:51:15.015 [2024-12-09 05:41:40.883379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225d1e0 (9): Bad file descriptor 00:51:15.015 [2024-12-09 05:41:40.883713] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:15.015 [2024-12-09 05:41:40.883744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x225d1e0 with addr=10.0.0.3, port=4421 00:51:15.015 [2024-12-09 05:41:40.883760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225d1e0 is same with the state(6) to be set 00:51:15.015 [2024-12-09 05:41:40.883821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225d1e0 (9): Bad file 
descriptor
00:51:15.015 [2024-12-09 05:41:40.883856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:51:15.015 [2024-12-09 05:41:40.883872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:51:15.015 [2024-12-09 05:41:40.883885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:51:15.015 [2024-12-09 05:41:40.883899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:51:15.015 [2024-12-09 05:41:40.883912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:51:15.015 7090.06 IOPS, 27.70 MiB/s [2024-12-09T05:42:01.987Z]
7147.08 IOPS, 27.92 MiB/s [2024-12-09T05:42:01.987Z]
7212.47 IOPS, 28.17 MiB/s [2024-12-09T05:42:01.987Z]
7277.79 IOPS, 28.43 MiB/s [2024-12-09T05:42:01.987Z]
7338.25 IOPS, 28.67 MiB/s [2024-12-09T05:42:01.987Z]
7395.76 IOPS, 28.89 MiB/s [2024-12-09T05:42:01.987Z]
7449.00 IOPS, 29.10 MiB/s [2024-12-09T05:42:01.987Z]
7495.67 IOPS, 29.28 MiB/s [2024-12-09T05:42:01.987Z]
7543.14 IOPS, 29.47 MiB/s [2024-12-09T05:42:01.987Z]
7590.62 IOPS, 29.65 MiB/s [2024-12-09T05:42:01.987Z]
[2024-12-09 05:41:50.930584] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:51:15.015 7636.98 IOPS, 29.83 MiB/s [2024-12-09T05:42:01.987Z]
7681.47 IOPS, 30.01 MiB/s [2024-12-09T05:42:01.987Z]
7723.94 IOPS, 30.17 MiB/s [2024-12-09T05:42:01.987Z]
7765.00 IOPS, 30.33 MiB/s [2024-12-09T05:42:01.987Z]
7796.10 IOPS, 30.45 MiB/s [2024-12-09T05:42:01.987Z]
7833.98 IOPS, 30.60 MiB/s [2024-12-09T05:42:01.987Z]
7870.87 IOPS, 30.75 MiB/s [2024-12-09T05:42:01.987Z]
7904.92 IOPS, 30.88 MiB/s [2024-12-09T05:42:01.987Z]
7938.31 IOPS, 31.01 MiB/s [2024-12-09T05:42:01.987Z]
7965.91 IOPS, 31.12 MiB/s [2024-12-09T05:42:01.987Z]
Received shutdown signal, test time was about 55.560594 seconds
00:51:15.015
00:51:15.015 Latency(us)
00:51:15.015 [2024-12-09T05:42:01.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:51:15.015 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:51:15.015 Verification LBA range: start 0x0 length 0x4000
00:51:15.015 Nvme0n1 : 55.56 7975.77 31.16 0.00 0.00 16018.01 428.22 7015926.69
00:51:15.015 [2024-12-09T05:42:01.987Z] ===================================================================================================================
00:51:15.015 [2024-12-09T05:42:01.987Z] Total : 7975.77 31.16 0.00 0.00 16018.01 428.22 7015926.69
00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '['
tcp == tcp ']' 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:51:15.015 rmmod nvme_tcp 00:51:15.015 rmmod nvme_fabrics 00:51:15.015 rmmod nvme_keyring 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80402 ']' 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80402 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80402 ']' 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80402 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80402 00:51:15.015 killing process with pid 80402 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80402' 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80402 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80402 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:51:15.015 05:42:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:51:15.015 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:51:15.273 05:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:51:15.273 00:51:15.273 real 1m1.231s 00:51:15.273 user 2m50.088s 00:51:15.273 sys 0m17.982s 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:15.273 ************************************ 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:51:15.273 END TEST nvmf_host_multipath 00:51:15.273 ************************************ 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:15.273 05:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:51:15.273 ************************************ 00:51:15.273 START TEST nvmf_timeout 00:51:15.273 ************************************ 00:51:15.274 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:51:15.274 * Looking for test storage... 
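The nvmftestfini/nvmf_veth_fini trace above tears the multipath test network back down: the SPDK-tagged iptables rules are filtered out with an iptables-save | grep -v | iptables-restore pipeline, the veth endpoints are detached from the nvmf_br bridge and deleted, the interfaces inside nvmf_tgt_ns_spdk are removed, and _remove_spdk_ns is invoked at the end. Collected into a standalone sketch (the final ip netns delete is an assumption about what _remove_spdk_ns amounts to; every other command is taken from the trace above):

    #!/usr/bin/env bash
    # Sketch of the teardown traced above; run as root on the test VM.
    set -x

    # Drop only the firewall rules the harness added (they carry an SPDK_NVMF comment).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the bridge-side veth endpoints from nvmf_br and bring them down.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster || true
        ip link set "$port" down || true
    done

    # Remove the bridge, the host-side veth devices, and the target-side
    # devices living inside the namespace.
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true

    # Assumption: _remove_spdk_ns ultimately deletes the target namespace.
    ip netns delete nvmf_tgt_ns_spdk || true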
00:51:15.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:51:15.274 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:51:15.274 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:51:15.274 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:51:15.533 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:51:15.533 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:15.533 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:15.533 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:15.533 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:51:15.533 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:51:15.533 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:51:15.533 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:51:15.533 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:51:15.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:15.534 --rc genhtml_branch_coverage=1 00:51:15.534 --rc genhtml_function_coverage=1 00:51:15.534 --rc genhtml_legend=1 00:51:15.534 --rc geninfo_all_blocks=1 00:51:15.534 --rc geninfo_unexecuted_blocks=1 00:51:15.534 00:51:15.534 ' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:51:15.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:15.534 --rc genhtml_branch_coverage=1 00:51:15.534 --rc genhtml_function_coverage=1 00:51:15.534 --rc genhtml_legend=1 00:51:15.534 --rc geninfo_all_blocks=1 00:51:15.534 --rc geninfo_unexecuted_blocks=1 00:51:15.534 00:51:15.534 ' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:51:15.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:15.534 --rc genhtml_branch_coverage=1 00:51:15.534 --rc genhtml_function_coverage=1 00:51:15.534 --rc genhtml_legend=1 00:51:15.534 --rc geninfo_all_blocks=1 00:51:15.534 --rc geninfo_unexecuted_blocks=1 00:51:15.534 00:51:15.534 ' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:51:15.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:15.534 --rc genhtml_branch_coverage=1 00:51:15.534 --rc genhtml_function_coverage=1 00:51:15.534 --rc genhtml_legend=1 00:51:15.534 --rc geninfo_all_blocks=1 00:51:15.534 --rc geninfo_unexecuted_blocks=1 00:51:15.534 00:51:15.534 ' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:15.534 
05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:51:15.534 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:51:15.534 05:42:02 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:51:15.534 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:51:15.535 Cannot find device "nvmf_init_br" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:51:15.535 Cannot find device "nvmf_init_br2" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:51:15.535 Cannot find device "nvmf_tgt_br" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:51:15.535 Cannot find device "nvmf_tgt_br2" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:51:15.535 Cannot find device "nvmf_init_br" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:51:15.535 Cannot find device "nvmf_init_br2" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:51:15.535 Cannot find device "nvmf_tgt_br" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:51:15.535 Cannot find device "nvmf_tgt_br2" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:51:15.535 Cannot find device "nvmf_br" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:51:15.535 Cannot find device "nvmf_init_if" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:51:15.535 Cannot find device "nvmf_init_if2" 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:15.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:15.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:51:15.535 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
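Taken together, the nvmf_veth_init commands traced above build the timeout test's virtual network: one namespace for the target, four veth pairs whose bridge-side ends are enslaved to nvmf_br, initiator addresses 10.0.0.1/10.0.0.2 on the host side, target addresses 10.0.0.3/10.0.0.4 inside the namespace, and iptables ACCEPT rules for port 4420 tagged with an SPDK_NVMF comment so teardown can find them. A condensed sketch of the same sequence (device names, addresses, and flags are taken from the trace; only the loop grouping is a simplification):

    #!/usr/bin/env bash
    # Condensed sketch of the nvmf_veth_init sequence traced above.
    set -ex

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if end carries an IP address, the *_br end is bridged.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-side endpoints live inside the namespace.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1/2 for the initiator, 10.0.0.3/4 for the target.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up, create the bridge, and enslave the bridge-side ends.
    ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
    ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" master nvmf_br
    done

    # Allow NVMe/TCP traffic to port 4420; the comment lets teardown find the rules.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The ping checks that follow in the trace confirm both directions work: the host can reach 10.0.0.3/10.0.0.4 and the namespace can reach 10.0.0.1/10.0.0.2 across the bridge.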
00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:51:15.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:51:15.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:51:15.794 00:51:15.794 --- 10.0.0.3 ping statistics --- 00:51:15.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:15.794 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:51:15.794 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:51:15.794 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:51:15.794 00:51:15.794 --- 10.0.0.4 ping statistics --- 00:51:15.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:15.794 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:51:15.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:15.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:51:15.794 00:51:15.794 --- 10.0.0.1 ping statistics --- 00:51:15.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:15.794 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:51:15.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:15.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:51:15.794 00:51:15.794 --- 10.0.0.2 ping statistics --- 00:51:15.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:15.794 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81622 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81622 00:51:15.794 05:42:02 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81622 ']' 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:15.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:15.794 05:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:51:16.052 [2024-12-09 05:42:02.784544] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:51:16.052 [2024-12-09 05:42:02.784634] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:16.052 [2024-12-09 05:42:02.920346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:51:16.052 [2024-12-09 05:42:02.948609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:16.052 [2024-12-09 05:42:02.948647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:16.052 [2024-12-09 05:42:02.948657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:16.052 [2024-12-09 05:42:02.948663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:16.052 [2024-12-09 05:42:02.948669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
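nvmfappstart above launches nvmf_tgt inside the namespace on two cores (-m 0x3) with all tracepoint groups enabled (-e 0xFFFF), then blocks in waitforlisten until the application's JSON-RPC socket answers. A simplified stand-in for that start-and-wait step (the polling loop is only an approximation of waitforlisten from autotest_common.sh, and rpc_get_methods is used here just as a cheap RPC to probe the socket):

    #!/usr/bin/env bash
    # Approximate sketch of nvmfappstart as traced above.
    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    RPC_SOCK=/var/tmp/spdk.sock

    # Start the target inside the test namespace: shm id 0, all tracepoint
    # groups enabled, cores 0-1.
    ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Poll the RPC socket until the app answers (roughly what waitforlisten does).
    for _ in $(seq 1 300); do
        "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods > /dev/null 2>&1 && break
        sleep 0.1
    done
    echo "nvmf_tgt is up, pid $nvmfpid"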
00:51:16.052 [2024-12-09 05:42:02.949483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:16.052 [2024-12-09 05:42:02.949494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:16.052 [2024-12-09 05:42:02.978699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:51:16.311 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:16.311 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:51:16.311 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:51:16.311 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:51:16.311 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:51:16.311 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:16.311 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:51:16.311 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:51:16.568 [2024-12-09 05:42:03.377699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:16.568 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:51:16.827 Malloc0 00:51:16.827 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:51:17.085 05:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:51:17.344 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:51:17.603 [2024-12-09 05:42:04.442023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:51:17.603 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81658 00:51:17.603 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:51:17.603 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81658 /var/tmp/bdevperf.sock 00:51:17.603 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81658 ']' 00:51:17.603 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:51:17.603 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:17.603 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:51:17.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
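The target for the timeout test is then provisioned over rpc.py exactly as traced above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1, its namespace, and a listener on 10.0.0.3:4420. The same sequence, collected in one place (all flags copied from the trace):

    #!/usr/bin/env bash
    # Target provisioning steps from the trace above, in order.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420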
00:51:17.603 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:17.603 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:51:17.603 [2024-12-09 05:42:04.506286] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:51:17.603 [2024-12-09 05:42:04.506352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81658 ] 00:51:17.862 [2024-12-09 05:42:04.658545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:17.862 [2024-12-09 05:42:04.698407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:51:17.862 [2024-12-09 05:42:04.732990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:51:17.862 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:17.862 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:51:17.862 05:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:51:18.120 05:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:51:18.377 NVMe0n1 00:51:18.635 05:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81674 00:51:18.635 05:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:51:18.635 05:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:51:18.635 Running I/O for 10 seconds... 
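On the initiator side, bdevperf has already been started with -q 128 -o 4096 -w verify -t 10 against /var/tmp/bdevperf.sock, and the trace above configures it over that socket: bdev_nvme_set_options -r -1, then bdev_nvme_attach_controller for NVMe0 with a 5-second controller-loss timeout and a 2-second reconnect delay, after which bdevperf.py perform_tests kicks off the workload. The same host-side steps grouped together (flag values copied verbatim from the trace):

    #!/usr/bin/env bash
    # Host-side steps from the trace above, issued to the bdevperf RPC socket.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # bdev_nvme options; -r -1 is passed exactly as in the trace above.
    "$RPC" -s "$SOCK" bdev_nvme_set_options -r -1

    # Attach the remote controller; declare it lost after 5 s without a
    # connection and retry the connection every 2 s.
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Tell the waiting bdevperf instance to start its verify workload.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &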
00:51:19.569 05:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:51:19.829 7851.00 IOPS, 30.67 MiB/s [2024-12-09T05:42:06.801Z] [2024-12-09 05:42:06.616415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.829 [2024-12-09 05:42:06.616460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.829 [2024-12-09 05:42:06.616498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.829 [2024-12-09 05:42:06.616508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.829 [2024-12-09 05:42:06.616518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.829 [2024-12-09 05:42:06.616526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.829 [2024-12-09 05:42:06.616535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.829 [2024-12-09 05:42:06.616543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.829 [2024-12-09 05:42:06.616553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.829 [2024-12-09 05:42:06.616561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.829 [2024-12-09 05:42:06.616571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.829 [2024-12-09 05:42:06.616611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.829 [2024-12-09 05:42:06.616622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.829 [2024-12-09 05:42:06.616631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.829 [2024-12-09 05:42:06.616641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.829 [2024-12-09 05:42:06.616650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72040 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:51:19.830 [2024-12-09 05:42:06.616959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.616989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.616997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:19.830 [2024-12-09 05:42:06.617402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.830 [2024-12-09 05:42:06.617561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.830 [2024-12-09 05:42:06.617570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.617972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:51:19.831 [2024-12-09 05:42:06.617990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.617998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618259] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.831 [2024-12-09 05:42:06.618370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.831 [2024-12-09 05:42:06.618381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.618668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.618678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.619153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.619431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.619621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.619790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:77 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.619960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.620140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.620309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.620488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.620665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.620804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.620970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.620989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.620998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72760 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:51:19.832 [2024-12-09 05:42:06.621309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.832 [2024-12-09 05:42:06.621388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.832 [2024-12-09 05:42:06.621399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.833 [2024-12-09 05:42:06.621408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.833 [2024-12-09 05:42:06.621418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:19.833 [2024-12-09 05:42:06.621427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.833 [2024-12-09 05:42:06.621437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b9970 is same with the state(6) to be set 00:51:19.833 [2024-12-09 05:42:06.621449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:19.833 [2024-12-09 05:42:06.621456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:19.833 [2024-12-09 05:42:06.621464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72896 len:8 PRP1 0x0 PRP2 0x0 00:51:19.833 [2024-12-09 05:42:06.621474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:19.833 [2024-12-09 05:42:06.621766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:51:19.833 [2024-12-09 05:42:06.621847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2259e50 (9): Bad file descriptor 00:51:19.833 [2024-12-09 05:42:06.621944] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:19.833 [2024-12-09 05:42:06.621964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x2259e50 with addr=10.0.0.3, port=4420 00:51:19.833 [2024-12-09 05:42:06.621975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2259e50 is same with the state(6) to be set 00:51:19.833 [2024-12-09 05:42:06.621992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2259e50 (9): Bad file descriptor 00:51:19.833 [2024-12-09 05:42:06.622007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:51:19.833 [2024-12-09 05:42:06.622016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:51:19.833 [2024-12-09 05:42:06.622026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:51:19.833 [2024-12-09 05:42:06.622036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:51:19.833 [2024-12-09 05:42:06.622046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:51:19.833 05:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:51:21.708 4501.50 IOPS, 17.58 MiB/s [2024-12-09T05:42:08.680Z] 3001.00 IOPS, 11.72 MiB/s [2024-12-09T05:42:08.680Z] [2024-12-09 05:42:08.622222] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:21.708 [2024-12-09 05:42:08.622273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2259e50 with addr=10.0.0.3, port=4420 00:51:21.708 [2024-12-09 05:42:08.622288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2259e50 is same with the state(6) to be set 00:51:21.708 [2024-12-09 05:42:08.622311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2259e50 (9): Bad file descriptor 00:51:21.708 [2024-12-09 05:42:08.622330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:51:21.708 [2024-12-09 05:42:08.622339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:51:21.708 [2024-12-09 05:42:08.622350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:51:21.708 [2024-12-09 05:42:08.622360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:51:21.708 [2024-12-09 05:42:08.622370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:51:21.708 05:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:51:21.708 05:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:51:21.708 05:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:51:21.967 05:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:51:21.967 05:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:51:21.967 05:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:51:21.967 05:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:51:22.226 05:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:51:22.226 05:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:51:23.729 2250.75 IOPS, 8.79 MiB/s [2024-12-09T05:42:10.701Z] 1800.60 IOPS, 7.03 MiB/s [2024-12-09T05:42:10.701Z] [2024-12-09 05:42:10.622632] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:23.729 [2024-12-09 05:42:10.622696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2259e50 with addr=10.0.0.3, port=4420 00:51:23.729 [2024-12-09 05:42:10.622711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2259e50 is same with the state(6) to be set 00:51:23.729 [2024-12-09 05:42:10.622734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2259e50 (9): Bad file descriptor 00:51:23.729 [2024-12-09 05:42:10.622751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:51:23.729 [2024-12-09 05:42:10.622760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:51:23.729 [2024-12-09 05:42:10.622769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:51:23.729 [2024-12-09 05:42:10.622778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:51:23.729 [2024-12-09 05:42:10.622788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:51:25.606 1500.50 IOPS, 5.86 MiB/s [2024-12-09T05:42:12.835Z] 1286.14 IOPS, 5.02 MiB/s [2024-12-09T05:42:12.835Z] [2024-12-09 05:42:12.622909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:51:25.863 [2024-12-09 05:42:12.622951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:51:25.863 [2024-12-09 05:42:12.622977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:51:25.863 [2024-12-09 05:42:12.622986] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:51:25.863 [2024-12-09 05:42:12.622996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
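With --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, the trace above shows reconnect attempts at roughly 05:42:06, 05:42:08 and 05:42:10, and by 05:42:12 the controller is already in the failed state, so no further resets are scheduled. The checks that follow (host/timeout.sh@62 and @63) confirm that both the controller and its bdev have been deleted once the loss timeout expires; in sketch form, using only the rpc.py and jq invocations shown in this trace:

  # after ctrlr-loss-timeout-sec expires, the controller and its bdev should be gone
  name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == '' ]]   # controller NVMe0 has been removed
  bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
  [[ "$bdev" == '' ]]   # bdev NVMe0n1 has been removed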
00:51:26.795 1125.38 IOPS, 4.40 MiB/s 00:51:26.795 Latency(us) 00:51:26.795 [2024-12-09T05:42:13.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:26.795 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:51:26.795 Verification LBA range: start 0x0 length 0x4000 00:51:26.795 NVMe0n1 : 8.15 1105.21 4.32 15.71 0.00 114000.90 3485.32 7015926.69 00:51:26.795 [2024-12-09T05:42:13.767Z] =================================================================================================================== 00:51:26.795 [2024-12-09T05:42:13.767Z] Total : 1105.21 4.32 15.71 0.00 114000.90 3485.32 7015926.69 00:51:26.795 { 00:51:26.795 "results": [ 00:51:26.795 { 00:51:26.795 "job": "NVMe0n1", 00:51:26.795 "core_mask": "0x4", 00:51:26.795 "workload": "verify", 00:51:26.795 "status": "finished", 00:51:26.795 "verify_range": { 00:51:26.795 "start": 0, 00:51:26.795 "length": 16384 00:51:26.795 }, 00:51:26.795 "queue_depth": 128, 00:51:26.795 "io_size": 4096, 00:51:26.795 "runtime": 8.145952, 00:51:26.795 "iops": 1105.211521010681, 00:51:26.795 "mibps": 4.317232503947973, 00:51:26.795 "io_failed": 128, 00:51:26.795 "io_timeout": 0, 00:51:26.795 "avg_latency_us": 114000.90080186378, 00:51:26.795 "min_latency_us": 3485.3236363636365, 00:51:26.795 "max_latency_us": 7015926.69090909 00:51:26.795 } 00:51:26.795 ], 00:51:26.795 "core_count": 1 00:51:26.795 } 00:51:27.359 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:51:27.359 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:51:27.359 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:51:27.616 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:51:27.616 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:51:27.616 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:51:27.616 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81674 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81658 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81658 ']' 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81658 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81658 00:51:27.874 killing process with pid 81658 00:51:27.874 Received shutdown signal, test time was about 9.263794 seconds 00:51:27.874 00:51:27.874 Latency(us) 00:51:27.874 [2024-12-09T05:42:14.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:27.874 [2024-12-09T05:42:14.846Z] =================================================================================================================== 00:51:27.874 [2024-12-09T05:42:14.846Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81658' 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81658 00:51:27.874 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81658 00:51:28.132 05:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:51:28.389 [2024-12-09 05:42:15.105560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:51:28.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:51:28.389 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81798 00:51:28.389 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:51:28.389 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81798 /var/tmp/bdevperf.sock 00:51:28.389 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81798 ']' 00:51:28.389 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:51:28.389 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:28.389 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:51:28.389 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:28.389 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:51:28.390 [2024-12-09 05:42:15.167397] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:51:28.390 [2024-12-09 05:42:15.168094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81798 ] 00:51:28.390 [2024-12-09 05:42:15.313913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:28.390 [2024-12-09 05:42:15.343237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:51:28.647 [2024-12-09 05:42:15.372131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:51:28.647 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:28.647 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:51:28.647 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:51:28.905 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:51:29.163 NVMe0n1 00:51:29.163 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:51:29.163 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81813 00:51:29.163 05:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:51:29.163 Running I/O for 10 seconds... 
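The second scenario differs from the first only in the controller timeout knobs passed to bdev_nvme_attach_controller (host/timeout.sh@79 above): the controller-loss window stays at 5 s, a 2 s fast-io-fail timeout is added, and the reconnect delay drops to 1 s. The attach command below is taken verbatim from the trace; the behavioural note is the usual SPDK reading of fast_io_fail_timeout_sec (queued I/O is failed up to the bdev layer once that window expires while reconnect attempts continue until the loss timeout), not something this log states explicitly.

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1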
00:51:30.113 05:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:51:30.374 7844.00 IOPS, 30.64 MiB/s [2024-12-09T05:42:17.346Z] [2024-12-09 05:42:17.197159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 
00:51:30.374 [2024-12-09 05:42:17.197353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.374 [2024-12-09 05:42:17.197465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197708] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the 
state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.197913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa0a0 is same with the state(6) to be set 00:51:30.375 [2024-12-09 05:42:17.198754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.198789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.198810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.198820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.198830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.198839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.198849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.198858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.198868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.198876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.198886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.198895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.198905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.198913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.198923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.198931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.198958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.198967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.198977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.198985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.198996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.199004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.199014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.199023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.199033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.199041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.375 [2024-12-09 05:42:17.199055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.375 [2024-12-09 05:42:17.199064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 
[2024-12-09 05:42:17.199535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.376 [2024-12-09 05:42:17.199853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.376 [2024-12-09 05:42:17.199862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.199872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.199880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.199890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.199898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.199908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.199917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.199926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.199935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.199945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.199974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.199991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71640 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:51:30.377 [2024-12-09 05:42:17.200552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 
05:42:17.200801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.377 [2024-12-09 05:42:17.200875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.377 [2024-12-09 05:42:17.200894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.377 [2024-12-09 05:42:17.200909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.200927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.200937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.200948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.200956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.200968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.200993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:30.378 [2024-12-09 05:42:17.201253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:30.378 [2024-12-09 05:42:17.201272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a970 is same with the state(6) to be set 00:51:30.378 [2024-12-09 05:42:17.201302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:51:30.378 [2024-12-09 05:42:17.201315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71840 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71968 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71976 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71984 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71992 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72000 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201606] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72008 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72016 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72024 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72032 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72040 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72048 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.378 [2024-12-09 05:42:17.201878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:51:30.378 [2024-12-09 05:42:17.201886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72056 len:8 PRP1 0x0 PRP2 0x0 00:51:30.378 [2024-12-09 05:42:17.201895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.378 [2024-12-09 05:42:17.201904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.379 [2024-12-09 05:42:17.201911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.379 [2024-12-09 05:42:17.201919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72064 len:8 PRP1 0x0 PRP2 0x0 00:51:30.379 [2024-12-09 05:42:17.201927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.201952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.379 [2024-12-09 05:42:17.201959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.379 [2024-12-09 05:42:17.201968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72072 len:8 PRP1 0x0 PRP2 0x0 00:51:30.379 [2024-12-09 05:42:17.201982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.201997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.379 [2024-12-09 05:42:17.202010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.379 05:42:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:51:30.379 [2024-12-09 05:42:17.216472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72080 len:8 PRP1 0x0 PRP2 0x0 00:51:30.379 [2024-12-09 05:42:17.216518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.216532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.379 [2024-12-09 05:42:17.216540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.379 [2024-12-09 05:42:17.216547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72088 len:8 PRP1 0x0 PRP2 0x0 00:51:30.379 [2024-12-09 05:42:17.216555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.216563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.379 [2024-12-09 05:42:17.216585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.379 [2024-12-09 05:42:17.216624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72096 len:8 PRP1 0x0 PRP2 0x0 00:51:30.379 [2024-12-09 05:42:17.216632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.216642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.379 [2024-12-09 05:42:17.216649] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.379 [2024-12-09 05:42:17.216657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72104 len:8 PRP1 0x0 PRP2 0x0 00:51:30.379 [2024-12-09 05:42:17.216665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.216678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:30.379 [2024-12-09 05:42:17.216685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:30.379 [2024-12-09 05:42:17.216692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72112 len:8 PRP1 0x0 PRP2 0x0 00:51:30.379 [2024-12-09 05:42:17.216700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.216896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:30.379 [2024-12-09 05:42:17.216948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.216965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:30.379 [2024-12-09 05:42:17.216980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.216995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:30.379 [2024-12-09 05:42:17.217009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.217022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:30.379 [2024-12-09 05:42:17.217031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:30.379 [2024-12-09 05:42:17.217040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152ae50 is same with the state(6) to be set 00:51:30.379 [2024-12-09 05:42:17.217250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:51:30.379 [2024-12-09 05:42:17.217286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152ae50 (9): Bad file descriptor 00:51:30.379 [2024-12-09 05:42:17.217394] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:30.379 [2024-12-09 05:42:17.217417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152ae50 with addr=10.0.0.3, port=4420 00:51:30.379 [2024-12-09 05:42:17.217428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152ae50 is same with the state(6) to be set 00:51:30.379 [2024-12-09 05:42:17.217446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152ae50 (9): Bad file descriptor 00:51:30.379 [2024-12-09 05:42:17.217462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:51:30.379 [2024-12-09 05:42:17.217471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:51:30.379 [2024-12-09 05:42:17.217480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:51:30.379 [2024-12-09 05:42:17.217491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:51:30.379 [2024-12-09 05:42:17.217507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:51:31.314 4443.50 IOPS, 17.36 MiB/s [2024-12-09T05:42:18.286Z] 05:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:51:31.314 [2024-12-09 05:42:18.217643] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:31.314 [2024-12-09 05:42:18.217717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152ae50 with addr=10.0.0.3, port=4420 00:51:31.314 [2024-12-09 05:42:18.217733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152ae50 is same with the state(6) to be set 00:51:31.314 [2024-12-09 05:42:18.217754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152ae50 (9): Bad file descriptor 00:51:31.314 [2024-12-09 05:42:18.217771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:51:31.314 [2024-12-09 05:42:18.217791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:51:31.314 [2024-12-09 05:42:18.217801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:51:31.314 [2024-12-09 05:42:18.217812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:51:31.314 [2024-12-09 05:42:18.217822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:51:31.573 [2024-12-09 05:42:18.468075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:51:31.573 05:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81813 00:51:32.400 2962.33 IOPS, 11.57 MiB/s [2024-12-09T05:42:19.372Z] [2024-12-09 05:42:19.230031] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:51:34.294 2221.75 IOPS, 8.68 MiB/s [2024-12-09T05:42:22.249Z] 3582.00 IOPS, 13.99 MiB/s [2024-12-09T05:42:23.187Z] 4774.33 IOPS, 18.65 MiB/s [2024-12-09T05:42:24.125Z] 5625.71 IOPS, 21.98 MiB/s [2024-12-09T05:42:25.502Z] 6266.50 IOPS, 24.48 MiB/s [2024-12-09T05:42:26.437Z] 6770.00 IOPS, 26.45 MiB/s [2024-12-09T05:42:26.437Z] 7165.40 IOPS, 27.99 MiB/s 00:51:39.465 Latency(us) 00:51:39.465 [2024-12-09T05:42:26.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:39.465 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:51:39.465 Verification LBA range: start 0x0 length 0x4000 00:51:39.465 NVMe0n1 : 10.01 7170.53 28.01 0.00 0.00 17826.47 1280.93 3050402.91 00:51:39.465 [2024-12-09T05:42:26.437Z] =================================================================================================================== 00:51:39.465 [2024-12-09T05:42:26.437Z] Total : 7170.53 28.01 0.00 0.00 17826.47 1280.93 3050402.91 00:51:39.465 { 00:51:39.465 "results": [ 00:51:39.465 { 00:51:39.465 "job": "NVMe0n1", 00:51:39.465 "core_mask": "0x4", 00:51:39.465 "workload": "verify", 00:51:39.465 "status": "finished", 00:51:39.465 "verify_range": { 00:51:39.465 "start": 0, 00:51:39.465 "length": 16384 00:51:39.465 }, 00:51:39.465 "queue_depth": 128, 00:51:39.465 "io_size": 4096, 00:51:39.465 "runtime": 10.009574, 00:51:39.465 "iops": 7170.534929858154, 00:51:39.465 "mibps": 28.009902069758414, 00:51:39.465 "io_failed": 0, 00:51:39.465 "io_timeout": 0, 00:51:39.465 "avg_latency_us": 17826.467643841654, 00:51:39.465 "min_latency_us": 1280.9309090909092, 00:51:39.465 "max_latency_us": 3050402.909090909 00:51:39.465 } 00:51:39.465 ], 00:51:39.465 "core_count": 1 00:51:39.465 } 00:51:39.465 05:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81913 00:51:39.465 05:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:51:39.465 05:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:51:39.465 Running I/O for 10 seconds... 
00:51:40.402 05:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:51:40.402 7829.00 IOPS, 30.58 MiB/s [2024-12-09T05:42:27.374Z] [2024-12-09 05:42:27.355076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71136 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.402 [2024-12-09 05:42:27.355341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.402 [2024-12-09 05:42:27.355351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 
[2024-12-09 05:42:27.355570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.355983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.355995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.403 [2024-12-09 05:42:27.356224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:51:40.403 [2024-12-09 05:42:27.356255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.403 [2024-12-09 05:42:27.356271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356472] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356699] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71704 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.356982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.356994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.357003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.357013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.357022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.357032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.357040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.357051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.357059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.357069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.357078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.357087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.357096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.357106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.357114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.357127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.404 [2024-12-09 05:42:27.357141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.404 [2024-12-09 05:42:27.357159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.405 
[2024-12-09 05:42:27.357175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.405 [2024-12-09 05:42:27.357206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.405 [2024-12-09 05:42:27.357225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.405 [2024-12-09 05:42:27.357244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.405 [2024-12-09 05:42:27.357262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.405 [2024-12-09 05:42:27.357631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.405 [2024-12-09 05:42:27.357662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:40.405 [2024-12-09 05:42:27.357837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.357969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:40.405 [2024-12-09 05:42:27.357984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.358001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1588fd0 is same with the state(6) to be set 00:51:40.405 [2024-12-09 05:42:27.358019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:40.405 [2024-12-09 05:42:27.358032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:40.405 [2024-12-09 05:42:27.358044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71056 len:8 PRP1 0x0 PRP2 0x0 00:51:40.405 [2024-12-09 05:42:27.358057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:40.405 [2024-12-09 05:42:27.358380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:51:40.405 [2024-12-09 05:42:27.358478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152ae50 (9): Bad file descriptor 00:51:40.405 [2024-12-09 05:42:27.358624] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:40.406 [2024-12-09 05:42:27.358648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152ae50 with addr=10.0.0.3, port=4420 00:51:40.406 [2024-12-09 05:42:27.358670] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152ae50 is same with the state(6) to be set 00:51:40.406 [2024-12-09 05:42:27.358698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152ae50 (9): Bad file descriptor 00:51:40.406 [2024-12-09 05:42:27.358718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:51:40.406 [2024-12-09 05:42:27.358727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:51:40.406 [2024-12-09 05:42:27.358736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:51:40.406 [2024-12-09 05:42:27.358746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:51:40.406 [2024-12-09 05:42:27.358756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:51:40.664 05:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:51:41.600 4426.50 IOPS, 17.29 MiB/s [2024-12-09T05:42:28.572Z] [2024-12-09 05:42:28.358856] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:41.600 [2024-12-09 05:42:28.358912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152ae50 with addr=10.0.0.3, port=4420 00:51:41.600 [2024-12-09 05:42:28.358925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152ae50 is same with the state(6) to be set 00:51:41.600 [2024-12-09 05:42:28.358944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152ae50 (9): Bad file descriptor 00:51:41.600 [2024-12-09 05:42:28.358960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:51:41.600 [2024-12-09 05:42:28.358968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:51:41.600 [2024-12-09 05:42:28.358977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:51:41.600 [2024-12-09 05:42:28.358987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:51:41.600 [2024-12-09 05:42:28.358997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:51:42.535 2951.00 IOPS, 11.53 MiB/s [2024-12-09T05:42:29.507Z] [2024-12-09 05:42:29.359083] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:42.535 [2024-12-09 05:42:29.359136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152ae50 with addr=10.0.0.3, port=4420 00:51:42.535 [2024-12-09 05:42:29.359148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152ae50 is same with the state(6) to be set 00:51:42.535 [2024-12-09 05:42:29.359166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152ae50 (9): Bad file descriptor 00:51:42.535 [2024-12-09 05:42:29.359182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:51:42.535 [2024-12-09 05:42:29.359190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:51:42.535 [2024-12-09 05:42:29.359199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:51:42.535 [2024-12-09 05:42:29.359208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:51:42.535 [2024-12-09 05:42:29.359218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:51:43.468 2213.25 IOPS, 8.65 MiB/s [2024-12-09T05:42:30.440Z] [2024-12-09 05:42:30.362200] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:43.468 [2024-12-09 05:42:30.362299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152ae50 with addr=10.0.0.3, port=4420 00:51:43.468 [2024-12-09 05:42:30.362313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152ae50 is same with the state(6) to be set 00:51:43.468 [2024-12-09 05:42:30.362576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152ae50 (9): Bad file descriptor 00:51:43.468 [2024-12-09 05:42:30.362849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:51:43.468 [2024-12-09 05:42:30.362863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:51:43.468 [2024-12-09 05:42:30.362873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:51:43.468 [2024-12-09 05:42:30.362883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:51:43.468 [2024-12-09 05:42:30.362892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:51:43.468 05:42:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:51:43.727 [2024-12-09 05:42:30.648707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:51:43.727 05:42:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81913 00:51:44.551 1770.60 IOPS, 6.92 MiB/s [2024-12-09T05:42:31.523Z] [2024-12-09 05:42:31.394106] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:51:46.424 2960.67 IOPS, 11.57 MiB/s [2024-12-09T05:42:34.332Z] 4051.43 IOPS, 15.83 MiB/s [2024-12-09T05:42:35.275Z] 4874.75 IOPS, 19.04 MiB/s [2024-12-09T05:42:36.655Z] 5517.78 IOPS, 21.55 MiB/s [2024-12-09T05:42:36.655Z] 6028.40 IOPS, 23.55 MiB/s 00:51:49.683 Latency(us) 00:51:49.683 [2024-12-09T05:42:36.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:49.683 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:51:49.683 Verification LBA range: start 0x0 length 0x4000 00:51:49.683 NVMe0n1 : 10.01 6035.27 23.58 4154.12 0.00 12538.58 714.94 3019898.88 00:51:49.683 [2024-12-09T05:42:36.655Z] =================================================================================================================== 00:51:49.683 [2024-12-09T05:42:36.655Z] Total : 6035.27 23.58 4154.12 0.00 12538.58 0.00 3019898.88 00:51:49.683 { 00:51:49.683 "results": [ 00:51:49.683 { 00:51:49.683 "job": "NVMe0n1", 00:51:49.683 "core_mask": "0x4", 00:51:49.683 "workload": "verify", 00:51:49.683 "status": "finished", 00:51:49.683 "verify_range": { 00:51:49.683 "start": 0, 00:51:49.683 "length": 16384 00:51:49.683 }, 00:51:49.683 "queue_depth": 128, 00:51:49.683 "io_size": 4096, 00:51:49.683 "runtime": 10.009824, 00:51:49.683 "iops": 6035.270949818898, 00:51:49.683 "mibps": 23.57527714773007, 00:51:49.683 "io_failed": 41582, 00:51:49.683 "io_timeout": 0, 00:51:49.683 "avg_latency_us": 12538.584565010062, 00:51:49.683 "min_latency_us": 714.9381818181819, 00:51:49.683 "max_latency_us": 3019898.88 00:51:49.683 } 00:51:49.683 ], 00:51:49.683 "core_count": 1 00:51:49.683 } 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81798 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81798 ']' 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81798 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81798 00:51:49.683 killing process with pid 81798 00:51:49.683 Received shutdown signal, test time was about 10.000000 seconds 00:51:49.683 00:51:49.683 Latency(us) 00:51:49.683 [2024-12-09T05:42:36.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:49.683 [2024-12-09T05:42:36.655Z] =================================================================================================================== 00:51:49.683 [2024-12-09T05:42:36.655Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81798' 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81798 00:51:49.683 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81798 00:51:49.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:51:49.684 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:51:49.684 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82033 00:51:49.684 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82033 /var/tmp/bdevperf.sock 00:51:49.684 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82033 ']' 00:51:49.684 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:51:49.684 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:49.684 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:51:49.684 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:49.684 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:51:49.684 [2024-12-09 05:42:36.495466] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:51:49.684 [2024-12-09 05:42:36.495553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82033 ] 00:51:49.684 [2024-12-09 05:42:36.635422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:49.942 [2024-12-09 05:42:36.665390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:51:49.942 [2024-12-09 05:42:36.693594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:51:49.942 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:49.942 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:51:49.942 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82036 00:51:49.942 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82033 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:51:49.942 05:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:51:50.209 05:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:51:50.468 NVMe0n1 00:51:50.468 05:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82076 00:51:50.468 05:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:51:50.468 05:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:51:50.726 Running I/O for 10 seconds... 
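For reference, the bdevperf timeout-test setup that produces the trace above boils down to the following sequence. This is only a condensed sketch reconstructed from the commands visible in the log lines; the repo path, the /var/tmp/bdevperf.sock RPC socket, and the 10.0.0.3:4420 listener are specific to this run, and the backgrounding and PID capture are assumed (the log only shows the bdevperf_pid/rpc_pid variables, not the exact shell plumbing).
  # sketch, assuming the paths and addresses seen in this trace
  spdk=/home/vagrant/spdk_repo/spdk
  # start bdevperf in "wait for RPC" mode (-z) on core 2 (-m 0x4)
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!                                   # assumed: pid captured for bpftrace
  # attach the nvmf_timeout.bt probes to the bdevperf process
  $spdk/scripts/bpftrace.sh $bdevperf_pid $spdk/scripts/bpf/nvmf_timeout.bt &
  # unlimited retries (-r -1), 9 s transport ACK timeout (-e 9)
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # attach the target with a 5 s ctrlr-loss timeout and 2 s reconnect delay
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # kick off the I/O run whose per-second IOPS lines follow below
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &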
00:51:51.664 05:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:51:51.664 17145.00 IOPS, 66.97 MiB/s [2024-12-09T05:42:38.636Z] [2024-12-09 05:42:38.600213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 
nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104520 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:51:51.664 [2024-12-09 05:42:38.600969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.600988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.600999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.664 [2024-12-09 05:42:38.601008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.664 [2024-12-09 05:42:38.601018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 
05:42:38.601227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.665 [2024-12-09 05:42:38.601930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.665 [2024-12-09 05:42:38.601938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.601948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.601957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.601968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.601976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.601986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.601994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 
[2024-12-09 05:42:38.602131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:26 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16192 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.666 [2024-12-09 05:42:38.602866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.666 [2024-12-09 05:42:38.602877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.602891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.602908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.602925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.602942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.602961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.602974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.602984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.602995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:51:51.667 [2024-12-09 05:42:38.603107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:51.667 [2024-12-09 05:42:38.603257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e20 is same with the state(6) to be set 00:51:51.667 [2024-12-09 05:42:38.603279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:51:51.667 [2024-12-09 05:42:38.603290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:51:51.667 [2024-12-09 05:42:38.603298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88192 len:8 PRP1 0x0 PRP2 0x0 00:51:51.667 [2024-12-09 05:42:38.603306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:51.667 [2024-12-09 05:42:38.603473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:51:51.667 [2024-12-09 05:42:38.603492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:51.667 [2024-12-09 05:42:38.603510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:51.667 [2024-12-09 05:42:38.603527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:51.667 [2024-12-09 05:42:38.603552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2055e50 is same with the state(6) to be set 00:51:51.667 [2024-12-09 05:42:38.603978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:51:51.667 [2024-12-09 05:42:38.604017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055e50 (9): Bad file descriptor 00:51:51.667 [2024-12-09 05:42:38.604117] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:51.667 [2024-12-09 05:42:38.604139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2055e50 with addr=10.0.0.3, port=4420 00:51:51.667 [2024-12-09 05:42:38.604150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2055e50 is same with the state(6) to be set 00:51:51.667 [2024-12-09 05:42:38.604167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055e50 (9): Bad file descriptor 00:51:51.667 [2024-12-09 05:42:38.604189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:51:51.667 [2024-12-09 05:42:38.604205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:51:51.667 [2024-12-09 05:42:38.604221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:51:51.667 [2024-12-09 05:42:38.604235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:51:51.667 [2024-12-09 05:42:38.604245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:51:51.667 05:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82076 00:51:53.538 9715.50 IOPS, 37.95 MiB/s [2024-12-09T05:42:40.768Z] 6477.00 IOPS, 25.30 MiB/s [2024-12-09T05:42:40.768Z] [2024-12-09 05:42:40.604412] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:53.796 [2024-12-09 05:42:40.604478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2055e50 with addr=10.0.0.3, port=4420 00:51:53.796 [2024-12-09 05:42:40.604493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2055e50 is same with the state(6) to be set 00:51:53.796 [2024-12-09 05:42:40.604516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055e50 (9): Bad file descriptor 00:51:53.796 [2024-12-09 05:42:40.604534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:51:53.796 [2024-12-09 05:42:40.604542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:51:53.796 [2024-12-09 05:42:40.604552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:51:53.796 [2024-12-09 05:42:40.604574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:51:53.796 [2024-12-09 05:42:40.604586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:51:55.666 4857.75 IOPS, 18.98 MiB/s [2024-12-09T05:42:42.638Z] 3886.20 IOPS, 15.18 MiB/s [2024-12-09T05:42:42.638Z] [2024-12-09 05:42:42.604733] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:51:55.666 [2024-12-09 05:42:42.604796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2055e50 with addr=10.0.0.3, port=4420 00:51:55.666 [2024-12-09 05:42:42.604809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2055e50 is same with the state(6) to be set 00:51:55.666 [2024-12-09 05:42:42.604847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055e50 (9): Bad file descriptor 00:51:55.666 [2024-12-09 05:42:42.604865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:51:55.666 [2024-12-09 05:42:42.604874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:51:55.666 [2024-12-09 05:42:42.604884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:51:55.666 [2024-12-09 05:42:42.604894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:51:55.666 [2024-12-09 05:42:42.604905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:51:57.555 3238.50 IOPS, 12.65 MiB/s [2024-12-09T05:42:44.785Z] 2775.86 IOPS, 10.84 MiB/s [2024-12-09T05:42:44.785Z] [2024-12-09 05:42:44.605074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:51:57.813 [2024-12-09 05:42:44.605128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:51:57.813 [2024-12-09 05:42:44.605139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:51:57.813 [2024-12-09 05:42:44.605149] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:51:57.813 [2024-12-09 05:42:44.605159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:51:58.748 2428.88 IOPS, 9.49 MiB/s 00:51:58.748 Latency(us) 00:51:58.748 [2024-12-09T05:42:45.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:58.748 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:51:58.748 NVMe0n1 : 8.14 2386.50 9.32 15.72 0.00 53238.85 7030.23 7015926.69 00:51:58.748 [2024-12-09T05:42:45.720Z] =================================================================================================================== 00:51:58.748 [2024-12-09T05:42:45.720Z] Total : 2386.50 9.32 15.72 0.00 53238.85 7030.23 7015926.69 00:51:58.748 { 00:51:58.748 "results": [ 00:51:58.748 { 00:51:58.748 "job": "NVMe0n1", 00:51:58.748 "core_mask": "0x4", 00:51:58.748 "workload": "randread", 00:51:58.748 "status": "finished", 00:51:58.748 "queue_depth": 128, 00:51:58.748 "io_size": 4096, 00:51:58.748 "runtime": 8.142057, 00:51:58.748 "iops": 2386.4976626913813, 00:51:58.748 "mibps": 9.322256494888208, 00:51:58.748 "io_failed": 128, 00:51:58.748 "io_timeout": 0, 00:51:58.748 "avg_latency_us": 53238.85035208159, 00:51:58.748 "min_latency_us": 7030.225454545454, 00:51:58.748 "max_latency_us": 7015926.69090909 00:51:58.748 } 00:51:58.748 ], 00:51:58.748 "core_count": 1 00:51:58.748 } 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:51:58.748 Attaching 5 probes... 
00:51:58.748 1345.253774: reset bdev controller NVMe0 00:51:58.748 1345.352760: reconnect bdev controller NVMe0 00:51:58.748 3345.575922: reconnect delay bdev controller NVMe0 00:51:58.748 3345.610359: reconnect bdev controller NVMe0 00:51:58.748 5345.923624: reconnect delay bdev controller NVMe0 00:51:58.748 5345.939949: reconnect bdev controller NVMe0 00:51:58.748 7346.336781: reconnect delay bdev controller NVMe0 00:51:58.748 7346.352720: reconnect bdev controller NVMe0 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82036 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82033 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82033 ']' 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82033 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82033 00:51:58.748 killing process with pid 82033 00:51:58.748 Received shutdown signal, test time was about 8.206688 seconds 00:51:58.748 00:51:58.748 Latency(us) 00:51:58.748 [2024-12-09T05:42:45.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:58.748 [2024-12-09T05:42:45.720Z] =================================================================================================================== 00:51:58.748 [2024-12-09T05:42:45.720Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82033' 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82033 00:51:58.748 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82033 00:51:59.006 05:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:51:59.264 05:42:46 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:51:59.264 rmmod nvme_tcp 00:51:59.264 rmmod nvme_fabrics 00:51:59.264 rmmod nvme_keyring 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81622 ']' 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81622 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81622 ']' 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81622 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:59.264 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81622 00:51:59.525 killing process with pid 81622 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81622' 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81622 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81622 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:51:59.525 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:51:59.525 05:42:46 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:51:59.783 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:51:59.783 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:51:59.783 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:51:59.783 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:59.783 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:59.783 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:51:59.783 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:59.783 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:59.783 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:59.783 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:51:59.783 00:51:59.784 real 0m44.525s 00:51:59.784 user 2m10.529s 00:51:59.784 sys 0m5.252s 00:51:59.784 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:59.784 05:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:51:59.784 ************************************ 00:51:59.784 END TEST nvmf_timeout 00:51:59.784 ************************************ 00:51:59.784 05:42:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:51:59.784 05:42:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:51:59.784 00:51:59.784 real 4m54.160s 00:51:59.784 user 12m52.103s 00:51:59.784 sys 1m4.940s 00:51:59.784 05:42:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:59.784 ************************************ 00:51:59.784 END TEST nvmf_host 00:51:59.784 05:42:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:51:59.784 ************************************ 00:52:00.044 05:42:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:52:00.044 05:42:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:52:00.044 00:52:00.044 real 12m29.229s 00:52:00.044 user 30m9.422s 00:52:00.044 sys 3m0.741s 00:52:00.044 05:42:46 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:00.044 ************************************ 00:52:00.044 05:42:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:52:00.044 END TEST nvmf_tcp 00:52:00.044 ************************************ 00:52:00.044 05:42:46 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:52:00.044 05:42:46 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:52:00.044 05:42:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:00.044 05:42:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:00.044 05:42:46 -- common/autotest_common.sh@10 -- # set +x 00:52:00.044 ************************************ 00:52:00.044 START TEST nvmf_dif 00:52:00.044 ************************************ 00:52:00.044 05:42:46 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:52:00.044 * Looking for test storage... 
00:52:00.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:52:00.044 05:42:46 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:52:00.044 05:42:46 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:52:00.044 05:42:46 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:52:00.044 05:42:46 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:52:00.044 05:42:46 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:00.044 05:42:46 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:52:00.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:00.044 --rc genhtml_branch_coverage=1 00:52:00.044 --rc genhtml_function_coverage=1 00:52:00.044 --rc genhtml_legend=1 00:52:00.044 --rc geninfo_all_blocks=1 00:52:00.044 --rc geninfo_unexecuted_blocks=1 00:52:00.044 00:52:00.044 ' 00:52:00.044 05:42:46 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:52:00.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:00.044 --rc genhtml_branch_coverage=1 00:52:00.044 --rc genhtml_function_coverage=1 00:52:00.044 --rc genhtml_legend=1 00:52:00.044 --rc geninfo_all_blocks=1 00:52:00.044 --rc geninfo_unexecuted_blocks=1 00:52:00.044 00:52:00.044 ' 00:52:00.044 05:42:46 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:52:00.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:00.044 --rc genhtml_branch_coverage=1 00:52:00.044 --rc genhtml_function_coverage=1 00:52:00.044 --rc genhtml_legend=1 00:52:00.044 --rc geninfo_all_blocks=1 00:52:00.044 --rc geninfo_unexecuted_blocks=1 00:52:00.044 00:52:00.044 ' 00:52:00.044 05:42:46 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:52:00.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:00.044 --rc genhtml_branch_coverage=1 00:52:00.044 --rc genhtml_function_coverage=1 00:52:00.044 --rc genhtml_legend=1 00:52:00.044 --rc geninfo_all_blocks=1 00:52:00.044 --rc geninfo_unexecuted_blocks=1 00:52:00.044 00:52:00.044 ' 00:52:00.044 05:42:46 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:00.044 05:42:46 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:00.044 05:42:46 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:00.044 05:42:46 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:00.044 05:42:46 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:00.044 05:42:46 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:52:00.044 05:42:46 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:52:00.044 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:52:00.044 05:42:46 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:52:00.044 05:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:52:00.044 05:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:52:00.045 05:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:52:00.045 05:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:52:00.045 05:42:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:00.045 05:42:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:52:00.045 05:42:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:52:00.045 05:42:47 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:52:00.045 05:42:47 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:52:00.304 Cannot find device "nvmf_init_br" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@162 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:52:00.304 Cannot find device "nvmf_init_br2" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@163 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:52:00.304 Cannot find device "nvmf_tgt_br" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@164 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:52:00.304 Cannot find device "nvmf_tgt_br2" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@165 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:52:00.304 Cannot find device "nvmf_init_br" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@166 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:52:00.304 Cannot find device "nvmf_init_br2" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@167 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:52:00.304 Cannot find device "nvmf_tgt_br" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@168 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:52:00.304 Cannot find device "nvmf_tgt_br2" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@169 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:52:00.304 Cannot find device "nvmf_br" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@170 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:52:00.304 Cannot find device "nvmf_init_if" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@171 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:52:00.304 Cannot find device "nvmf_init_if2" 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@172 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:52:00.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@173 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:52:00.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@174 -- # true 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:52:00.304 05:42:47 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:52:00.305 05:42:47 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:52:00.564 05:42:47 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:52:00.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:52:00.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:52:00.564 00:52:00.564 --- 10.0.0.3 ping statistics --- 00:52:00.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:00.564 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:52:00.564 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:52:00.564 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:52:00.564 00:52:00.564 --- 10.0.0.4 ping statistics --- 00:52:00.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:00.564 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:52:00.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:52:00.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:52:00.564 00:52:00.564 --- 10.0.0.1 ping statistics --- 00:52:00.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:00.564 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:52:00.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:52:00.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:52:00.564 00:52:00.564 --- 10.0.0.2 ping statistics --- 00:52:00.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:00.564 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:52:00.564 05:42:47 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:52:00.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:52:00.825 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:52:00.825 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:52:00.825 05:42:47 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:00.825 05:42:47 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:52:00.825 05:42:47 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:52:00.825 05:42:47 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:00.825 05:42:47 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:52:00.825 05:42:47 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:52:00.825 05:42:47 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:52:00.825 05:42:47 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:52:00.825 05:42:47 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:00.825 05:42:47 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:00.825 05:42:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:52:00.825 05:42:47 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:52:00.825 05:42:47 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82566 00:52:01.085 05:42:47 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82566 00:52:01.085 05:42:47 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82566 ']' 00:52:01.085 05:42:47 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:01.085 05:42:47 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:01.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:01.085 05:42:47 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:01.085 05:42:47 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:01.085 05:42:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:52:01.085 [2024-12-09 05:42:47.854612] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:52:01.085 [2024-12-09 05:42:47.854720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:01.085 [2024-12-09 05:42:48.011685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:01.085 [2024-12-09 05:42:48.049742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:52:01.085 [2024-12-09 05:42:48.049809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:01.085 [2024-12-09 05:42:48.049824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:01.085 [2024-12-09 05:42:48.049834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:01.085 [2024-12-09 05:42:48.049844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:01.085 [2024-12-09 05:42:48.050367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:01.345 [2024-12-09 05:42:48.086875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:01.345 05:42:48 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:01.345 05:42:48 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:52:01.345 05:42:48 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:01.345 05:42:48 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:01.345 05:42:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:52:01.345 05:42:48 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:01.345 05:42:48 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:52:01.345 05:42:48 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:52:01.345 05:42:48 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:01.345 05:42:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:52:01.345 [2024-12-09 05:42:48.186972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:01.345 05:42:48 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:01.345 05:42:48 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:52:01.345 05:42:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:01.345 05:42:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:01.345 05:42:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:52:01.345 ************************************ 00:52:01.345 START TEST fio_dif_1_default 00:52:01.345 ************************************ 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:52:01.345 bdev_null0 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:52:01.345 
05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:52:01.345 [2024-12-09 05:42:48.235138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:52:01.345 { 00:52:01.345 "params": { 00:52:01.345 "name": "Nvme$subsystem", 00:52:01.345 "trtype": "$TEST_TRANSPORT", 00:52:01.345 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:52:01.345 "adrfam": "ipv4", 00:52:01.345 "trsvcid": "$NVMF_PORT", 00:52:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:52:01.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:52:01.345 "hdgst": ${hdgst:-false}, 00:52:01.345 "ddgst": ${ddgst:-false} 00:52:01.345 }, 00:52:01.345 "method": "bdev_nvme_attach_controller" 00:52:01.345 } 00:52:01.345 EOF 00:52:01.345 )") 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:52:01.345 05:42:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:52:01.345 "params": { 00:52:01.345 "name": "Nvme0", 00:52:01.345 "trtype": "tcp", 00:52:01.345 "traddr": "10.0.0.3", 00:52:01.345 "adrfam": "ipv4", 00:52:01.345 "trsvcid": "4420", 00:52:01.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:52:01.346 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:52:01.346 "hdgst": false, 00:52:01.346 "ddgst": false 00:52:01.346 }, 00:52:01.346 "method": "bdev_nvme_attach_controller" 00:52:01.346 }' 00:52:01.346 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:01.346 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:01.346 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:01.346 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:52:01.346 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:01.346 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:01.346 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:01.346 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:01.346 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:52:01.346 05:42:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:01.605 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:52:01.605 fio-3.35 00:52:01.605 Starting 1 thread 00:52:13.887 00:52:13.887 filename0: (groupid=0, jobs=1): err= 0: pid=82625: Mon Dec 9 05:42:58 2024 00:52:13.887 read: IOPS=9945, BW=38.8MiB/s (40.7MB/s)(389MiB/10001msec) 00:52:13.887 slat (usec): min=5, max=417, avg= 7.55, stdev= 4.34 00:52:13.887 clat (usec): min=318, max=3231, avg=379.93, stdev=47.56 00:52:13.887 lat (usec): min=324, max=3257, avg=387.49, stdev=48.45 00:52:13.887 clat percentiles (usec): 00:52:13.887 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 
338], 20.00th=[ 351], 00:52:13.887 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:52:13.887 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 424], 95.00th=[ 445], 00:52:13.887 | 99.00th=[ 523], 99.50th=[ 578], 99.90th=[ 775], 99.95th=[ 857], 00:52:13.887 | 99.99th=[ 1188] 00:52:13.887 bw ( KiB/s): min=37344, max=41184, per=100.00%, avg=39811.37, stdev=978.48, samples=19 00:52:13.887 iops : min= 9336, max=10296, avg=9952.84, stdev=244.62, samples=19 00:52:13.887 lat (usec) : 500=98.51%, 750=1.37%, 1000=0.10% 00:52:13.887 lat (msec) : 2=0.02%, 4=0.01% 00:52:13.887 cpu : usr=84.45%, sys=13.53%, ctx=62, majf=0, minf=9 00:52:13.887 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:52:13.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:13.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:13.887 issued rwts: total=99464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:13.887 latency : target=0, window=0, percentile=100.00%, depth=4 00:52:13.887 00:52:13.887 Run status group 0 (all jobs): 00:52:13.887 READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=389MiB (407MB), run=10001-10001msec 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.887 00:52:13.887 real 0m11.014s 00:52:13.887 user 0m9.122s 00:52:13.887 sys 0m1.600s 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:13.887 ************************************ 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 END TEST fio_dif_1_default 00:52:13.887 ************************************ 00:52:13.887 05:42:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:52:13.887 05:42:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:13.887 05:42:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:13.887 05:42:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 ************************************ 00:52:13.887 START TEST fio_dif_1_multi_subsystems 00:52:13.887 ************************************ 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # 
fio_dif_1_multi_subsystems 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 bdev_null0 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 [2024-12-09 05:42:59.307741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 bdev_null1 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.887 05:42:59 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.887 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:52:13.888 { 00:52:13.888 "params": { 00:52:13.888 "name": "Nvme$subsystem", 00:52:13.888 "trtype": "$TEST_TRANSPORT", 00:52:13.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:52:13.888 "adrfam": "ipv4", 00:52:13.888 "trsvcid": "$NVMF_PORT", 00:52:13.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:52:13.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:52:13.888 "hdgst": ${hdgst:-false}, 00:52:13.888 "ddgst": ${ddgst:-false} 00:52:13.888 }, 00:52:13.888 "method": "bdev_nvme_attach_controller" 00:52:13.888 } 00:52:13.888 EOF 00:52:13.888 )") 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:52:13.888 05:42:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:52:13.888 { 00:52:13.888 "params": { 00:52:13.888 "name": "Nvme$subsystem", 00:52:13.888 "trtype": "$TEST_TRANSPORT", 00:52:13.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:52:13.888 "adrfam": "ipv4", 00:52:13.888 "trsvcid": "$NVMF_PORT", 00:52:13.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:52:13.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:52:13.888 "hdgst": ${hdgst:-false}, 00:52:13.888 "ddgst": ${ddgst:-false} 00:52:13.888 }, 00:52:13.888 "method": "bdev_nvme_attach_controller" 00:52:13.888 } 00:52:13.888 EOF 00:52:13.888 )") 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
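The xtrace above shows gen_nvmf_target_json building one bdev_nvme_attach_controller stanza per subsystem and merging them with jq; fio receives the merged document on /dev/fd/62 (printed in the next trace lines) alongside the job file generated by gen_fio_conf on /dev/fd/61. The following is a minimal standalone sketch of the same setup with the config written to a regular file instead of a process-substitution descriptor; the outer subsystems/bdev/config wrapper, the /tmp paths, and the placeholder job file are assumptions for illustration, while the stanza contents and the plugin path are taken from this trace.

# Sketch: hand-written JSON config attaching one NVMe-oF/TCP controller as an SPDK bdev.
# The "params" block mirrors the Nvme0 stanza printed just below by gen_nvmf_target_json;
# the surrounding subsystems/bdev/config wrapper is assumed, not shown in this trace.
cat > /tmp/spdk_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# The SPDK bdev ioengine is preloaded exactly as in the LD_PRELOAD line of this trace;
# /tmp/dif.fio stands in for the job file the harness generates on /dev/fd/61.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf /tmp/spdk_bdev.json /tmp/dif.fio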
00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:52:13.888 "params": { 00:52:13.888 "name": "Nvme0", 00:52:13.888 "trtype": "tcp", 00:52:13.888 "traddr": "10.0.0.3", 00:52:13.888 "adrfam": "ipv4", 00:52:13.888 "trsvcid": "4420", 00:52:13.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:52:13.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:52:13.888 "hdgst": false, 00:52:13.888 "ddgst": false 00:52:13.888 }, 00:52:13.888 "method": "bdev_nvme_attach_controller" 00:52:13.888 },{ 00:52:13.888 "params": { 00:52:13.888 "name": "Nvme1", 00:52:13.888 "trtype": "tcp", 00:52:13.888 "traddr": "10.0.0.3", 00:52:13.888 "adrfam": "ipv4", 00:52:13.888 "trsvcid": "4420", 00:52:13.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:52:13.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:52:13.888 "hdgst": false, 00:52:13.888 "ddgst": false 00:52:13.888 }, 00:52:13.888 "method": "bdev_nvme_attach_controller" 00:52:13.888 }' 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:52:13.888 05:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:13.888 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:52:13.888 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:52:13.888 fio-3.35 00:52:13.888 Starting 2 threads 00:52:23.852 00:52:23.852 filename0: (groupid=0, jobs=1): err= 0: pid=82785: Mon Dec 9 05:43:10 2024 00:52:23.852 read: IOPS=5237, BW=20.5MiB/s (21.5MB/s)(205MiB/10001msec) 00:52:23.852 slat (nsec): min=6354, max=86295, avg=12414.41, stdev=4495.06 00:52:23.852 clat (usec): min=572, max=1374, avg=730.28, stdev=59.05 00:52:23.852 lat (usec): min=579, max=1404, avg=742.69, stdev=59.91 00:52:23.852 clat percentiles (usec): 00:52:23.852 | 1.00th=[ 619], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 685], 00:52:23.852 | 30.00th=[ 701], 40.00th=[ 709], 50.00th=[ 725], 60.00th=[ 734], 00:52:23.852 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 807], 95.00th=[ 840], 00:52:23.852 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 1012], 99.95th=[ 1156], 00:52:23.852 | 99.99th=[ 1319] 00:52:23.852 bw ( KiB/s): min=20608, max=21312, per=50.07%, avg=20981.89, stdev=202.68, samples=19 00:52:23.852 iops : min= 5160, max= 
5328, avg=5245.89, stdev=49.88, samples=19 00:52:23.852 lat (usec) : 750=68.94%, 1000=30.95% 00:52:23.852 lat (msec) : 2=0.11% 00:52:23.852 cpu : usr=89.75%, sys=8.99%, ctx=12, majf=0, minf=0 00:52:23.852 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:52:23.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:23.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:23.852 issued rwts: total=52384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:23.852 latency : target=0, window=0, percentile=100.00%, depth=4 00:52:23.852 filename1: (groupid=0, jobs=1): err= 0: pid=82786: Mon Dec 9 05:43:10 2024 00:52:23.852 read: IOPS=5238, BW=20.5MiB/s (21.5MB/s)(205MiB/10001msec) 00:52:23.852 slat (nsec): min=6300, max=68452, avg=12679.74, stdev=4542.47 00:52:23.852 clat (usec): min=362, max=1322, avg=728.41, stdev=54.59 00:52:23.852 lat (usec): min=369, max=1336, avg=741.09, stdev=55.22 00:52:23.852 clat percentiles (usec): 00:52:23.852 | 1.00th=[ 644], 5.00th=[ 660], 10.00th=[ 668], 20.00th=[ 685], 00:52:23.852 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 717], 60.00th=[ 734], 00:52:23.852 | 70.00th=[ 750], 80.00th=[ 766], 90.00th=[ 799], 95.00th=[ 832], 00:52:23.852 | 99.00th=[ 898], 99.50th=[ 922], 99.90th=[ 1020], 99.95th=[ 1188], 00:52:23.852 | 99.99th=[ 1270] 00:52:23.852 bw ( KiB/s): min=20608, max=21312, per=50.07%, avg=20983.58, stdev=203.18, samples=19 00:52:23.852 iops : min= 5152, max= 5328, avg=5245.89, stdev=50.80, samples=19 00:52:23.852 lat (usec) : 500=0.01%, 750=71.75%, 1000=28.13% 00:52:23.852 lat (msec) : 2=0.12% 00:52:23.852 cpu : usr=90.42%, sys=8.32%, ctx=12, majf=0, minf=0 00:52:23.852 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:52:23.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:23.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:23.852 issued rwts: total=52388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:23.852 latency : target=0, window=0, percentile=100.00%, depth=4 00:52:23.852 00:52:23.852 Run status group 0 (all jobs): 00:52:23.852 READ: bw=40.9MiB/s (42.9MB/s), 20.5MiB/s-20.5MiB/s (21.5MB/s-21.5MB/s), io=409MiB (429MB), run=10001-10001msec 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.852 00:52:23.852 real 0m11.128s 00:52:23.852 user 0m18.800s 00:52:23.852 sys 0m1.994s 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:23.852 05:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:52:23.852 ************************************ 00:52:23.852 END TEST fio_dif_1_multi_subsystems 00:52:23.852 ************************************ 00:52:23.852 05:43:10 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:52:23.852 05:43:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:23.852 05:43:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:23.852 05:43:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:52:23.852 ************************************ 00:52:23.852 START TEST fio_dif_rand_params 00:52:23.852 ************************************ 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:52:23.852 05:43:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:23.852 bdev_null0 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:23.852 [2024-12-09 05:43:10.491216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:52:23.852 05:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:52:23.852 { 00:52:23.852 "params": { 00:52:23.852 "name": "Nvme$subsystem", 00:52:23.852 "trtype": "$TEST_TRANSPORT", 00:52:23.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:52:23.852 "adrfam": "ipv4", 00:52:23.852 "trsvcid": "$NVMF_PORT", 00:52:23.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:52:23.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:52:23.852 "hdgst": ${hdgst:-false}, 00:52:23.852 "ddgst": ${ddgst:-false} 00:52:23.852 }, 00:52:23.852 "method": "bdev_nvme_attach_controller" 00:52:23.852 } 00:52:23.852 EOF 00:52:23.852 )") 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
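The create_subsystem helper traced above reduces to four SPDK JSON-RPCs per subsystem: create a null bdev with the requested DIF type, create the NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. The commands and arguments below are copied from the rpc_cmd calls in this trace; running them through the standard scripts/rpc.py client and its default RPC socket is an assumption for illustration, since the harness wraps them in rpc_cmd.

# Sketch of the per-subsystem setup sequence traced above (subsystem 0, DIF type 3).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client path; the harness uses rpc_cmd

# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, protection information type 3 (target/dif.sh@21).
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# NVMe-oF subsystem, namespace, and TCP listener on 10.0.0.3:4420 (target/dif.sh@22-24).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420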
00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:52:23.853 "params": { 00:52:23.853 "name": "Nvme0", 00:52:23.853 "trtype": "tcp", 00:52:23.853 "traddr": "10.0.0.3", 00:52:23.853 "adrfam": "ipv4", 00:52:23.853 "trsvcid": "4420", 00:52:23.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:52:23.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:52:23.853 "hdgst": false, 00:52:23.853 "ddgst": false 00:52:23.853 }, 00:52:23.853 "method": "bdev_nvme_attach_controller" 00:52:23.853 }' 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:52:23.853 05:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:23.853 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:52:23.853 ... 
00:52:23.853 fio-3.35 00:52:23.853 Starting 3 threads 00:52:30.470 00:52:30.470 filename0: (groupid=0, jobs=1): err= 0: pid=82941: Mon Dec 9 05:43:16 2024 00:52:30.470 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(171MiB/5003msec) 00:52:30.470 slat (nsec): min=6585, max=61492, avg=13670.86, stdev=5698.08 00:52:30.470 clat (usec): min=8214, max=13042, avg=10940.56, stdev=398.34 00:52:30.470 lat (usec): min=8220, max=13053, avg=10954.23, stdev=398.64 00:52:30.470 clat percentiles (usec): 00:52:30.470 | 1.00th=[10552], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:52:30.470 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10814], 00:52:30.470 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11731], 00:52:30.470 | 99.00th=[12518], 99.50th=[12518], 99.90th=[13042], 99.95th=[13042], 00:52:30.470 | 99.99th=[13042] 00:52:30.470 bw ( KiB/s): min=33792, max=36096, per=33.42%, avg=35072.00, stdev=858.65, samples=9 00:52:30.470 iops : min= 264, max= 282, avg=274.00, stdev= 6.71, samples=9 00:52:30.470 lat (msec) : 10=0.22%, 20=99.78% 00:52:30.470 cpu : usr=90.90%, sys=8.58%, ctx=8, majf=0, minf=0 00:52:30.470 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:52:30.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:30.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:30.470 issued rwts: total=1368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:30.470 latency : target=0, window=0, percentile=100.00%, depth=3 00:52:30.470 filename0: (groupid=0, jobs=1): err= 0: pid=82942: Mon Dec 9 05:43:16 2024 00:52:30.470 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(171MiB/5005msec) 00:52:30.470 slat (nsec): min=6873, max=55537, avg=14749.59, stdev=5008.41 00:52:30.470 clat (usec): min=7790, max=13184, avg=10943.36, stdev=411.19 00:52:30.470 lat (usec): min=7802, max=13210, avg=10958.11, stdev=411.52 00:52:30.470 clat percentiles (usec): 00:52:30.470 | 1.00th=[10552], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:52:30.470 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10814], 00:52:30.470 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11731], 00:52:30.470 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13173], 99.95th=[13173], 00:52:30.470 | 99.99th=[13173] 00:52:30.470 bw ( KiB/s): min=34560, max=35328, per=33.29%, avg=34944.00, stdev=404.77, samples=10 00:52:30.470 iops : min= 270, max= 276, avg=273.00, stdev= 3.16, samples=10 00:52:30.470 lat (msec) : 10=0.22%, 20=99.78% 00:52:30.470 cpu : usr=91.35%, sys=8.11%, ctx=8, majf=0, minf=0 00:52:30.470 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:52:30.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:30.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:30.470 issued rwts: total=1368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:30.470 latency : target=0, window=0, percentile=100.00%, depth=3 00:52:30.470 filename0: (groupid=0, jobs=1): err= 0: pid=82943: Mon Dec 9 05:43:16 2024 00:52:30.470 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(171MiB/5005msec) 00:52:30.470 slat (nsec): min=6712, max=63307, avg=15097.58, stdev=5494.87 00:52:30.470 clat (usec): min=7784, max=13030, avg=10940.94, stdev=407.41 00:52:30.470 lat (usec): min=7797, max=13050, avg=10956.04, stdev=407.73 00:52:30.470 clat percentiles (usec): 00:52:30.470 | 1.00th=[10552], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:52:30.470 | 30.00th=[10683], 40.00th=[10814], 
50.00th=[10814], 60.00th=[10814], 00:52:30.470 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11731], 00:52:30.470 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13042], 99.95th=[13042], 00:52:30.470 | 99.99th=[13042] 00:52:30.470 bw ( KiB/s): min=34560, max=35328, per=33.29%, avg=34944.00, stdev=404.77, samples=10 00:52:30.470 iops : min= 270, max= 276, avg=273.00, stdev= 3.16, samples=10 00:52:30.470 lat (msec) : 10=0.22%, 20=99.78% 00:52:30.470 cpu : usr=90.91%, sys=8.55%, ctx=35, majf=0, minf=0 00:52:30.470 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:52:30.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:30.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:30.470 issued rwts: total=1368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:30.470 latency : target=0, window=0, percentile=100.00%, depth=3 00:52:30.470 00:52:30.470 Run status group 0 (all jobs): 00:52:30.470 READ: bw=102MiB/s (107MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=513MiB (538MB), run=5003-5005msec 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:52:30.470 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:52:30.471 
05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 bdev_null0 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 [2024-12-09 05:43:16.477638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 bdev_null1 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 bdev_null2 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:52:30.471 { 00:52:30.471 "params": { 00:52:30.471 "name": "Nvme$subsystem", 00:52:30.471 "trtype": "$TEST_TRANSPORT", 00:52:30.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:52:30.471 "adrfam": "ipv4", 00:52:30.471 "trsvcid": "$NVMF_PORT", 00:52:30.471 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:52:30.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:52:30.471 "hdgst": ${hdgst:-false}, 00:52:30.471 "ddgst": ${ddgst:-false} 00:52:30.471 }, 00:52:30.471 "method": "bdev_nvme_attach_controller" 00:52:30.471 } 00:52:30.471 EOF 00:52:30.471 )") 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:52:30.471 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:52:30.472 { 00:52:30.472 "params": { 00:52:30.472 "name": "Nvme$subsystem", 00:52:30.472 "trtype": "$TEST_TRANSPORT", 00:52:30.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:52:30.472 "adrfam": "ipv4", 00:52:30.472 "trsvcid": "$NVMF_PORT", 00:52:30.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:52:30.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:52:30.472 "hdgst": ${hdgst:-false}, 00:52:30.472 "ddgst": ${ddgst:-false} 00:52:30.472 }, 00:52:30.472 "method": "bdev_nvme_attach_controller" 00:52:30.472 } 00:52:30.472 EOF 00:52:30.472 )") 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:52:30.472 { 00:52:30.472 "params": { 00:52:30.472 "name": "Nvme$subsystem", 00:52:30.472 "trtype": "$TEST_TRANSPORT", 00:52:30.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:52:30.472 "adrfam": "ipv4", 00:52:30.472 "trsvcid": "$NVMF_PORT", 00:52:30.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:52:30.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:52:30.472 "hdgst": ${hdgst:-false}, 00:52:30.472 "ddgst": ${ddgst:-false} 00:52:30.472 }, 00:52:30.472 "method": "bdev_nvme_attach_controller" 00:52:30.472 } 00:52:30.472 EOF 00:52:30.472 )") 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:52:30.472 "params": { 00:52:30.472 "name": "Nvme0", 00:52:30.472 "trtype": "tcp", 00:52:30.472 "traddr": "10.0.0.3", 00:52:30.472 "adrfam": "ipv4", 00:52:30.472 "trsvcid": "4420", 00:52:30.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:52:30.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:52:30.472 "hdgst": false, 00:52:30.472 "ddgst": false 00:52:30.472 }, 00:52:30.472 "method": "bdev_nvme_attach_controller" 00:52:30.472 },{ 00:52:30.472 "params": { 00:52:30.472 "name": "Nvme1", 00:52:30.472 "trtype": "tcp", 00:52:30.472 "traddr": "10.0.0.3", 00:52:30.472 "adrfam": "ipv4", 00:52:30.472 "trsvcid": "4420", 00:52:30.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:52:30.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:52:30.472 "hdgst": false, 00:52:30.472 "ddgst": false 00:52:30.472 }, 00:52:30.472 "method": "bdev_nvme_attach_controller" 00:52:30.472 },{ 00:52:30.472 "params": { 00:52:30.472 "name": "Nvme2", 00:52:30.472 "trtype": "tcp", 00:52:30.472 "traddr": "10.0.0.3", 00:52:30.472 "adrfam": "ipv4", 00:52:30.472 "trsvcid": "4420", 00:52:30.472 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:52:30.472 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:52:30.472 "hdgst": false, 00:52:30.472 "ddgst": false 00:52:30.472 }, 00:52:30.472 "method": "bdev_nvme_attach_controller" 00:52:30.472 }' 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:52:30.472 05:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:30.472 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:52:30.472 ... 00:52:30.472 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:52:30.472 ... 00:52:30.472 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:52:30.472 ... 00:52:30.472 fio-3.35 00:52:30.472 Starting 24 threads 00:52:48.539 00:52:48.539 filename0: (groupid=0, jobs=1): err= 0: pid=83038: Mon Dec 9 05:43:35 2024 00:52:48.539 read: IOPS=554, BW=2217KiB/s (2270kB/s)(21.7MiB/10011msec) 00:52:48.539 slat (usec): min=4, max=8032, avg=20.41, stdev=178.50 00:52:48.539 clat (msec): min=8, max=156, avg=28.78, stdev=18.51 00:52:48.539 lat (msec): min=8, max=156, avg=28.80, stdev=18.52 00:52:48.539 clat percentiles (msec): 00:52:48.539 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 16], 00:52:48.539 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 24], 60.00th=[ 29], 00:52:48.539 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 71], 00:52:48.539 | 99.00th=[ 99], 99.50th=[ 123], 99.90th=[ 136], 99.95th=[ 136], 00:52:48.539 | 99.99th=[ 157] 00:52:48.539 bw ( KiB/s): min= 624, max= 3800, per=4.96%, avg=2159.32, stdev=990.10, samples=19 00:52:48.539 iops : min= 156, max= 950, avg=539.79, stdev=247.54, samples=19 00:52:48.539 lat (msec) : 10=0.78%, 20=40.57%, 50=51.91%, 100=5.79%, 250=0.96% 00:52:48.539 cpu : usr=61.92%, sys=3.41%, ctx=905, majf=0, minf=9 00:52:48.539 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:52:48.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 issued rwts: total=5548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.540 filename0: (groupid=0, jobs=1): err= 0: pid=83039: Mon Dec 9 05:43:35 2024 00:52:48.540 read: IOPS=576, BW=2308KiB/s (2363kB/s)(22.6MiB/10010msec) 00:52:48.540 slat (usec): min=4, max=4024, avg=19.21, stdev=79.37 00:52:48.540 clat (msec): min=8, max=152, avg=27.65, stdev=16.61 00:52:48.540 lat (msec): min=8, max=152, avg=27.67, stdev=16.61 00:52:48.540 clat percentiles (msec): 00:52:48.540 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 16], 00:52:48.540 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 24], 60.00th=[ 28], 00:52:48.540 | 70.00th=[ 34], 80.00th=[ 39], 90.00th=[ 43], 95.00th=[ 61], 00:52:48.540 | 99.00th=[ 92], 99.50th=[ 100], 99.90th=[ 129], 99.95th=[ 129], 00:52:48.540 | 99.99th=[ 153] 00:52:48.540 bw ( KiB/s): min= 728, max= 3720, per=5.16%, avg=2244.89, stdev=971.90, samples=19 00:52:48.540 iops : min= 182, max= 930, avg=561.21, stdev=242.98, samples=19 00:52:48.540 lat (msec) : 10=0.99%, 20=42.11%, 50=50.34%, 100=6.11%, 250=0.45% 00:52:48.540 cpu : usr=68.99%, sys=4.28%, ctx=784, majf=0, minf=9 00:52:48.540 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=83.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:52:48.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 complete : 0=0.0%, 4=87.0%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 issued rwts: 
total=5775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.540 filename0: (groupid=0, jobs=1): err= 0: pid=83040: Mon Dec 9 05:43:35 2024 00:52:48.540 read: IOPS=360, BW=1442KiB/s (1477kB/s)(14.2MiB/10059msec) 00:52:48.540 slat (usec): min=6, max=4023, avg=16.17, stdev=115.34 00:52:48.540 clat (usec): min=1267, max=131151, avg=44220.00, stdev=19150.55 00:52:48.540 lat (usec): min=1283, max=131169, avg=44236.18, stdev=19148.89 00:52:48.540 clat percentiles (msec): 00:52:48.540 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 23], 20.00th=[ 31], 00:52:48.540 | 30.00th=[ 35], 40.00th=[ 39], 50.00th=[ 42], 60.00th=[ 48], 00:52:48.540 | 70.00th=[ 52], 80.00th=[ 60], 90.00th=[ 70], 95.00th=[ 75], 00:52:48.540 | 99.00th=[ 103], 99.50th=[ 109], 99.90th=[ 130], 99.95th=[ 132], 00:52:48.540 | 99.99th=[ 132] 00:52:48.540 bw ( KiB/s): min= 752, max= 3337, per=3.32%, avg=1443.35, stdev=568.15, samples=20 00:52:48.540 iops : min= 188, max= 834, avg=360.80, stdev=141.99, samples=20 00:52:48.540 lat (msec) : 2=0.06%, 4=0.22%, 10=1.76%, 20=7.06%, 50=59.97% 00:52:48.540 lat (msec) : 100=29.83%, 250=1.10% 00:52:48.540 cpu : usr=38.22%, sys=2.66%, ctx=1298, majf=0, minf=9 00:52:48.540 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=75.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:52:48.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 complete : 0=0.0%, 4=89.9%, 8=8.8%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 issued rwts: total=3627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.540 filename0: (groupid=0, jobs=1): err= 0: pid=83041: Mon Dec 9 05:43:35 2024 00:52:48.540 read: IOPS=391, BW=1566KiB/s (1603kB/s)(15.4MiB/10059msec) 00:52:48.540 slat (usec): min=4, max=4028, avg=19.77, stdev=168.95 00:52:48.540 clat (usec): min=1229, max=105331, avg=40716.27, stdev=17451.31 00:52:48.540 lat (usec): min=1245, max=105346, avg=40736.04, stdev=17450.28 00:52:48.540 clat percentiles (msec): 00:52:48.540 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 27], 00:52:48.540 | 30.00th=[ 32], 40.00th=[ 36], 50.00th=[ 39], 60.00th=[ 44], 00:52:48.540 | 70.00th=[ 48], 80.00th=[ 55], 90.00th=[ 62], 95.00th=[ 73], 00:52:48.540 | 99.00th=[ 91], 99.50th=[ 103], 99.90th=[ 106], 99.95th=[ 106], 00:52:48.540 | 99.99th=[ 106] 00:52:48.540 bw ( KiB/s): min= 872, max= 3648, per=3.60%, avg=1567.95, stdev=602.68, samples=20 00:52:48.540 iops : min= 218, max= 912, avg=391.95, stdev=150.64, samples=20 00:52:48.540 lat (msec) : 2=0.05%, 4=0.10%, 10=2.01%, 20=7.95%, 50=63.20% 00:52:48.540 lat (msec) : 100=26.19%, 250=0.51% 00:52:48.540 cpu : usr=45.43%, sys=2.96%, ctx=1393, majf=0, minf=9 00:52:48.540 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:52:48.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 issued rwts: total=3937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.540 filename0: (groupid=0, jobs=1): err= 0: pid=83042: Mon Dec 9 05:43:35 2024 00:52:48.540 read: IOPS=352, BW=1411KiB/s (1445kB/s)(13.9MiB/10053msec) 00:52:48.540 slat (usec): min=4, max=10411, avg=25.79, stdev=290.07 00:52:48.540 clat (msec): min=5, max=106, avg=45.17, stdev=17.18 00:52:48.540 lat (msec): min=5, max=106, avg=45.19, stdev=17.19 00:52:48.540 clat percentiles (msec): 00:52:48.540 
| 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 24], 20.00th=[ 34], 00:52:48.540 | 30.00th=[ 36], 40.00th=[ 40], 50.00th=[ 46], 60.00th=[ 48], 00:52:48.540 | 70.00th=[ 53], 80.00th=[ 59], 90.00th=[ 67], 95.00th=[ 73], 00:52:48.540 | 99.00th=[ 91], 99.50th=[ 104], 99.90th=[ 106], 99.95th=[ 107], 00:52:48.540 | 99.99th=[ 107] 00:52:48.540 bw ( KiB/s): min= 816, max= 3088, per=3.25%, avg=1412.30, stdev=478.53, samples=20 00:52:48.540 iops : min= 204, max= 772, avg=353.05, stdev=119.63, samples=20 00:52:48.540 lat (msec) : 10=1.52%, 20=6.15%, 50=57.82%, 100=33.83%, 250=0.68% 00:52:48.540 cpu : usr=36.91%, sys=2.38%, ctx=1326, majf=0, minf=9 00:52:48.540 IO depths : 1=0.1%, 2=2.8%, 4=11.1%, 8=70.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:52:48.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 complete : 0=0.0%, 4=90.7%, 8=6.8%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 issued rwts: total=3547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.540 filename0: (groupid=0, jobs=1): err= 0: pid=83043: Mon Dec 9 05:43:35 2024 00:52:48.540 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.3MiB/10033msec) 00:52:48.540 slat (usec): min=4, max=4025, avg=18.02, stdev=150.77 00:52:48.540 clat (msec): min=8, max=119, avg=32.34, stdev=16.07 00:52:48.540 lat (msec): min=8, max=119, avg=32.35, stdev=16.07 00:52:48.540 clat percentiles (msec): 00:52:48.540 | 1.00th=[ 11], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 18], 00:52:48.540 | 30.00th=[ 23], 40.00th=[ 25], 50.00th=[ 30], 60.00th=[ 36], 00:52:48.540 | 70.00th=[ 39], 80.00th=[ 41], 90.00th=[ 48], 95.00th=[ 70], 00:52:48.540 | 99.00th=[ 87], 99.50th=[ 95], 99.90th=[ 107], 99.95th=[ 107], 00:52:48.540 | 99.99th=[ 121] 00:52:48.540 bw ( KiB/s): min= 896, max= 3632, per=4.54%, avg=1976.74, stdev=796.35, samples=19 00:52:48.540 iops : min= 224, max= 908, avg=494.16, stdev=199.10, samples=19 00:52:48.540 lat (msec) : 10=0.38%, 20=23.54%, 50=66.54%, 100=9.21%, 250=0.32% 00:52:48.540 cpu : usr=54.27%, sys=3.27%, ctx=1006, majf=0, minf=9 00:52:48.540 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=82.7%, 16=16.9%, 32=0.0%, >=64=0.0% 00:52:48.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 complete : 0=0.0%, 4=87.7%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 issued rwts: total=4949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.540 filename0: (groupid=0, jobs=1): err= 0: pid=83044: Mon Dec 9 05:43:35 2024 00:52:48.540 read: IOPS=366, BW=1466KiB/s (1501kB/s)(14.4MiB/10036msec) 00:52:48.540 slat (usec): min=5, max=8031, avg=26.00, stdev=257.17 00:52:48.540 clat (msec): min=8, max=104, avg=43.48, stdev=16.89 00:52:48.540 lat (msec): min=8, max=104, avg=43.50, stdev=16.89 00:52:48.540 clat percentiles (msec): 00:52:48.540 | 1.00th=[ 13], 5.00th=[ 20], 10.00th=[ 24], 20.00th=[ 29], 00:52:48.540 | 30.00th=[ 34], 40.00th=[ 38], 50.00th=[ 41], 60.00th=[ 48], 00:52:48.540 | 70.00th=[ 52], 80.00th=[ 58], 90.00th=[ 67], 95.00th=[ 74], 00:52:48.540 | 99.00th=[ 91], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 105], 00:52:48.540 | 99.99th=[ 105] 00:52:48.540 bw ( KiB/s): min= 944, max= 2496, per=3.37%, avg=1467.20, stdev=451.63, samples=20 00:52:48.540 iops : min= 236, max= 624, avg=366.80, stdev=112.91, samples=20 00:52:48.540 lat (msec) : 10=0.08%, 20=5.17%, 50=62.34%, 100=32.03%, 250=0.38% 00:52:48.540 cpu : usr=42.57%, sys=2.73%, ctx=1223, majf=0, minf=9 00:52:48.540 IO depths : 
1=0.2%, 2=1.8%, 4=7.1%, 8=75.7%, 16=15.2%, 32=0.0%, >=64=0.0% 00:52:48.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 complete : 0=0.0%, 4=89.2%, 8=9.2%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.540 issued rwts: total=3678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.540 filename0: (groupid=0, jobs=1): err= 0: pid=83045: Mon Dec 9 05:43:35 2024 00:52:48.540 read: IOPS=554, BW=2217KiB/s (2270kB/s)(21.7MiB/10002msec) 00:52:48.540 slat (usec): min=4, max=10023, avg=22.01, stdev=180.45 00:52:48.540 clat (msec): min=3, max=157, avg=28.78, stdev=17.65 00:52:48.540 lat (msec): min=3, max=157, avg=28.80, stdev=17.66 00:52:48.540 clat percentiles (msec): 00:52:48.540 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 17], 00:52:48.540 | 30.00th=[ 18], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 29], 00:52:48.540 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 72], 00:52:48.541 | 99.00th=[ 97], 99.50th=[ 108], 99.90th=[ 132], 99.95th=[ 132], 00:52:48.541 | 99.99th=[ 157] 00:52:48.541 bw ( KiB/s): min= 640, max= 4032, per=4.91%, avg=2133.53, stdev=931.03, samples=19 00:52:48.541 iops : min= 160, max= 1008, avg=533.37, stdev=232.76, samples=19 00:52:48.541 lat (msec) : 4=0.13%, 10=2.22%, 20=34.15%, 50=56.64%, 100=6.04% 00:52:48.541 lat (msec) : 250=0.83% 00:52:48.541 cpu : usr=59.97%, sys=3.89%, ctx=847, majf=0, minf=9 00:52:48.541 IO depths : 1=0.1%, 2=0.7%, 4=3.1%, 8=80.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:52:48.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 issued rwts: total=5544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.541 filename1: (groupid=0, jobs=1): err= 0: pid=83046: Mon Dec 9 05:43:35 2024 00:52:48.541 read: IOPS=551, BW=2207KiB/s (2260kB/s)(21.6MiB/10006msec) 00:52:48.541 slat (usec): min=3, max=7026, avg=18.58, stdev=111.54 00:52:48.541 clat (msec): min=6, max=160, avg=28.91, stdev=18.26 00:52:48.541 lat (msec): min=6, max=160, avg=28.92, stdev=18.26 00:52:48.541 clat percentiles (msec): 00:52:48.541 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 15], 20.00th=[ 17], 00:52:48.541 | 30.00th=[ 18], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 28], 00:52:48.541 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 43], 95.00th=[ 74], 00:52:48.541 | 99.00th=[ 103], 99.50th=[ 108], 99.90th=[ 130], 99.95th=[ 130], 00:52:48.541 | 99.99th=[ 161] 00:52:48.541 bw ( KiB/s): min= 640, max= 3624, per=4.96%, avg=2158.00, stdev=968.51, samples=19 00:52:48.541 iops : min= 160, max= 906, avg=539.47, stdev=242.17, samples=19 00:52:48.541 lat (msec) : 10=1.34%, 20=38.02%, 50=52.83%, 100=6.39%, 250=1.41% 00:52:48.541 cpu : usr=70.14%, sys=4.24%, ctx=1005, majf=0, minf=9 00:52:48.541 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:52:48.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 complete : 0=0.0%, 4=88.6%, 8=10.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 issued rwts: total=5521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.541 filename1: (groupid=0, jobs=1): err= 0: pid=83047: Mon Dec 9 05:43:35 2024 00:52:48.541 read: IOPS=584, BW=2339KiB/s (2395kB/s)(22.9MiB/10004msec) 00:52:48.541 slat (usec): min=3, max=4034, avg=21.66, stdev=123.92 00:52:48.541 clat 
(msec): min=5, max=163, avg=27.27, stdev=17.06 00:52:48.541 lat (msec): min=5, max=163, avg=27.29, stdev=17.07 00:52:48.541 clat percentiles (msec): 00:52:48.541 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 15], 00:52:48.541 | 30.00th=[ 17], 40.00th=[ 19], 50.00th=[ 24], 60.00th=[ 27], 00:52:48.541 | 70.00th=[ 33], 80.00th=[ 39], 90.00th=[ 43], 95.00th=[ 65], 00:52:48.541 | 99.00th=[ 91], 99.50th=[ 99], 99.90th=[ 129], 99.95th=[ 129], 00:52:48.541 | 99.99th=[ 165] 00:52:48.541 bw ( KiB/s): min= 641, max= 4152, per=5.22%, avg=2270.95, stdev=1044.08, samples=19 00:52:48.541 iops : min= 160, max= 1038, avg=567.68, stdev=261.08, samples=19 00:52:48.541 lat (msec) : 10=1.85%, 20=43.64%, 50=47.74%, 100=6.39%, 250=0.38% 00:52:48.541 cpu : usr=74.38%, sys=4.38%, ctx=602, majf=0, minf=9 00:52:48.541 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:52:48.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 issued rwts: total=5850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.541 filename1: (groupid=0, jobs=1): err= 0: pid=83048: Mon Dec 9 05:43:35 2024 00:52:48.541 read: IOPS=375, BW=1502KiB/s (1539kB/s)(14.7MiB/10029msec) 00:52:48.541 slat (usec): min=3, max=12029, avg=26.36, stdev=311.68 00:52:48.541 clat (msec): min=10, max=103, avg=42.46, stdev=15.20 00:52:48.541 lat (msec): min=10, max=103, avg=42.48, stdev=15.20 00:52:48.541 clat percentiles (msec): 00:52:48.541 | 1.00th=[ 14], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 31], 00:52:48.541 | 30.00th=[ 34], 40.00th=[ 37], 50.00th=[ 41], 60.00th=[ 46], 00:52:48.541 | 70.00th=[ 48], 80.00th=[ 54], 90.00th=[ 61], 95.00th=[ 71], 00:52:48.541 | 99.00th=[ 90], 99.50th=[ 97], 99.90th=[ 102], 99.95th=[ 104], 00:52:48.541 | 99.99th=[ 104] 00:52:48.541 bw ( KiB/s): min= 944, max= 2310, per=3.45%, avg=1502.30, stdev=349.21, samples=20 00:52:48.541 iops : min= 236, max= 577, avg=375.55, stdev=87.24, samples=20 00:52:48.541 lat (msec) : 20=3.48%, 50=72.02%, 100=24.26%, 250=0.24% 00:52:48.541 cpu : usr=39.91%, sys=2.73%, ctx=1281, majf=0, minf=9 00:52:48.541 IO depths : 1=0.1%, 2=1.0%, 4=3.6%, 8=79.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:52:48.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 issued rwts: total=3767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.541 filename1: (groupid=0, jobs=1): err= 0: pid=83049: Mon Dec 9 05:43:35 2024 00:52:48.541 read: IOPS=516, BW=2067KiB/s (2117kB/s)(20.2MiB/10001msec) 00:52:48.541 slat (usec): min=5, max=4031, avg=17.71, stdev=123.11 00:52:48.541 clat (msec): min=3, max=163, avg=30.88, stdev=19.29 00:52:48.541 lat (msec): min=3, max=163, avg=30.90, stdev=19.29 00:52:48.541 clat percentiles (msec): 00:52:48.541 | 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 16], 20.00th=[ 17], 00:52:48.541 | 30.00th=[ 19], 40.00th=[ 21], 50.00th=[ 25], 60.00th=[ 33], 00:52:48.541 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 74], 00:52:48.541 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 128], 99.95th=[ 128], 00:52:48.541 | 99.99th=[ 165] 00:52:48.541 bw ( KiB/s): min= 641, max= 3496, per=4.60%, avg=2000.95, stdev=921.20, samples=19 00:52:48.541 iops : min= 160, max= 874, avg=500.21, stdev=230.33, samples=19 00:52:48.541 
lat (msec) : 4=0.14%, 10=1.18%, 20=35.68%, 50=55.28%, 100=5.67% 00:52:48.541 lat (msec) : 250=2.05% 00:52:48.541 cpu : usr=48.17%, sys=2.74%, ctx=1308, majf=0, minf=9 00:52:48.541 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=76.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:52:48.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 complete : 0=0.0%, 4=89.6%, 8=9.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.541 filename1: (groupid=0, jobs=1): err= 0: pid=83050: Mon Dec 9 05:43:35 2024 00:52:48.541 read: IOPS=526, BW=2105KiB/s (2156kB/s)(20.6MiB/10005msec) 00:52:48.541 slat (usec): min=3, max=8025, avg=24.23, stdev=204.61 00:52:48.541 clat (msec): min=8, max=163, avg=30.26, stdev=18.84 00:52:48.541 lat (msec): min=8, max=163, avg=30.28, stdev=18.84 00:52:48.541 clat percentiles (msec): 00:52:48.541 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 16], 20.00th=[ 17], 00:52:48.541 | 30.00th=[ 19], 40.00th=[ 22], 50.00th=[ 25], 60.00th=[ 31], 00:52:48.541 | 70.00th=[ 37], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 77], 00:52:48.541 | 99.00th=[ 103], 99.50th=[ 117], 99.90th=[ 136], 99.95th=[ 138], 00:52:48.541 | 99.99th=[ 163] 00:52:48.541 bw ( KiB/s): min= 640, max= 3424, per=4.77%, avg=2074.79, stdev=919.84, samples=19 00:52:48.541 iops : min= 160, max= 856, avg=518.63, stdev=230.02, samples=19 00:52:48.541 lat (msec) : 10=0.72%, 20=33.21%, 50=58.70%, 100=5.85%, 250=1.52% 00:52:48.541 cpu : usr=57.66%, sys=3.43%, ctx=1345, majf=0, minf=10 00:52:48.541 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=76.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:52:48.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 complete : 0=0.0%, 4=89.2%, 8=9.5%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 issued rwts: total=5266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.541 filename1: (groupid=0, jobs=1): err= 0: pid=83051: Mon Dec 9 05:43:35 2024 00:52:48.541 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10001msec) 00:52:48.541 slat (usec): min=4, max=5023, avg=23.53, stdev=170.89 00:52:48.541 clat (msec): min=3, max=179, avg=27.80, stdev=17.72 00:52:48.541 lat (msec): min=3, max=179, avg=27.82, stdev=17.72 00:52:48.541 clat percentiles (msec): 00:52:48.541 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 16], 00:52:48.541 | 30.00th=[ 17], 40.00th=[ 20], 50.00th=[ 24], 60.00th=[ 28], 00:52:48.541 | 70.00th=[ 34], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 70], 00:52:48.541 | 99.00th=[ 94], 99.50th=[ 109], 99.90th=[ 157], 99.95th=[ 157], 00:52:48.541 | 99.99th=[ 180] 00:52:48.541 bw ( KiB/s): min= 641, max= 4168, per=5.11%, avg=2220.42, stdev=1009.75, samples=19 00:52:48.541 iops : min= 160, max= 1042, avg=555.05, stdev=252.47, samples=19 00:52:48.541 lat (msec) : 4=0.10%, 10=3.02%, 20=39.52%, 50=50.30%, 100=6.54% 00:52:48.541 lat (msec) : 250=0.52% 00:52:48.541 cpu : usr=58.61%, sys=3.37%, ctx=1791, majf=0, minf=9 00:52:48.541 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:52:48.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.541 issued rwts: total=5736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.541 filename1: (groupid=0, jobs=1): err= 0: 
pid=83052: Mon Dec 9 05:43:35 2024 00:52:48.541 read: IOPS=361, BW=1445KiB/s (1480kB/s)(14.2MiB/10035msec) 00:52:48.541 slat (usec): min=3, max=11034, avg=27.94, stdev=363.01 00:52:48.541 clat (msec): min=10, max=107, avg=44.10, stdev=15.91 00:52:48.541 lat (msec): min=10, max=107, avg=44.13, stdev=15.92 00:52:48.541 clat percentiles (msec): 00:52:48.541 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 34], 00:52:48.541 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 43], 60.00th=[ 48], 00:52:48.541 | 70.00th=[ 49], 80.00th=[ 59], 90.00th=[ 63], 95.00th=[ 72], 00:52:48.541 | 99.00th=[ 94], 99.50th=[ 97], 99.90th=[ 107], 99.95th=[ 108], 00:52:48.541 | 99.99th=[ 108] 00:52:48.541 bw ( KiB/s): min= 952, max= 2304, per=3.32%, avg=1445.70, stdev=367.88, samples=20 00:52:48.541 iops : min= 238, max= 576, avg=361.40, stdev=91.96, samples=20 00:52:48.542 lat (msec) : 20=2.59%, 50=70.57%, 100=26.48%, 250=0.36% 00:52:48.542 cpu : usr=31.19%, sys=1.97%, ctx=1007, majf=0, minf=9 00:52:48.542 IO depths : 1=0.2%, 2=1.7%, 4=6.5%, 8=76.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:52:48.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 complete : 0=0.0%, 4=89.3%, 8=9.3%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 issued rwts: total=3626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.542 filename1: (groupid=0, jobs=1): err= 0: pid=83053: Mon Dec 9 05:43:35 2024 00:52:48.542 read: IOPS=527, BW=2109KiB/s (2159kB/s)(20.6MiB/10009msec) 00:52:48.542 slat (usec): min=4, max=5030, avg=21.78, stdev=159.86 00:52:48.542 clat (msec): min=6, max=132, avg=30.24, stdev=18.89 00:52:48.542 lat (msec): min=6, max=132, avg=30.26, stdev=18.88 00:52:48.542 clat percentiles (msec): 00:52:48.542 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 16], 20.00th=[ 17], 00:52:48.542 | 30.00th=[ 18], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 31], 00:52:48.542 | 70.00th=[ 37], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 78], 00:52:48.542 | 99.00th=[ 102], 99.50th=[ 107], 99.90th=[ 121], 99.95th=[ 121], 00:52:48.542 | 99.99th=[ 133] 00:52:48.542 bw ( KiB/s): min= 640, max= 3664, per=4.72%, avg=2051.32, stdev=939.41, samples=19 00:52:48.542 iops : min= 160, max= 916, avg=512.79, stdev=234.90, samples=19 00:52:48.542 lat (msec) : 10=1.57%, 20=34.27%, 50=56.05%, 100=6.82%, 250=1.29% 00:52:48.542 cpu : usr=44.91%, sys=2.97%, ctx=1550, majf=0, minf=9 00:52:48.542 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=77.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:52:48.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 complete : 0=0.0%, 4=89.0%, 8=9.9%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 issued rwts: total=5276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.542 filename2: (groupid=0, jobs=1): err= 0: pid=83054: Mon Dec 9 05:43:35 2024 00:52:48.542 read: IOPS=366, BW=1467KiB/s (1502kB/s)(14.4MiB/10050msec) 00:52:48.542 slat (usec): min=4, max=8029, avg=29.95, stdev=342.50 00:52:48.542 clat (msec): min=11, max=107, avg=43.43, stdev=16.22 00:52:48.542 lat (msec): min=11, max=107, avg=43.46, stdev=16.22 00:52:48.542 clat percentiles (msec): 00:52:48.542 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 31], 00:52:48.542 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 42], 60.00th=[ 48], 00:52:48.542 | 70.00th=[ 49], 80.00th=[ 58], 90.00th=[ 63], 95.00th=[ 72], 00:52:48.542 | 99.00th=[ 93], 99.50th=[ 104], 99.90th=[ 108], 99.95th=[ 108], 00:52:48.542 | 99.99th=[ 
108] 00:52:48.542 bw ( KiB/s): min= 920, max= 2528, per=3.37%, avg=1466.60, stdev=398.04, samples=20 00:52:48.542 iops : min= 230, max= 632, avg=366.65, stdev=99.51, samples=20 00:52:48.542 lat (msec) : 20=3.69%, 50=70.01%, 100=25.67%, 250=0.62% 00:52:48.542 cpu : usr=34.64%, sys=2.47%, ctx=1076, majf=0, minf=9 00:52:48.542 IO depths : 1=0.2%, 2=1.3%, 4=4.8%, 8=77.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:52:48.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 complete : 0=0.0%, 4=88.9%, 8=10.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 issued rwts: total=3685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.542 filename2: (groupid=0, jobs=1): err= 0: pid=83055: Mon Dec 9 05:43:35 2024 00:52:48.542 read: IOPS=357, BW=1430KiB/s (1464kB/s)(14.0MiB/10039msec) 00:52:48.542 slat (usec): min=6, max=4021, avg=15.39, stdev=84.11 00:52:48.542 clat (msec): min=12, max=114, avg=44.65, stdev=16.03 00:52:48.542 lat (msec): min=12, max=114, avg=44.67, stdev=16.03 00:52:48.542 clat percentiles (msec): 00:52:48.542 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 25], 20.00th=[ 33], 00:52:48.542 | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 44], 60.00th=[ 48], 00:52:48.542 | 70.00th=[ 51], 80.00th=[ 60], 90.00th=[ 65], 95.00th=[ 72], 00:52:48.542 | 99.00th=[ 94], 99.50th=[ 99], 99.90th=[ 104], 99.95th=[ 104], 00:52:48.542 | 99.99th=[ 115] 00:52:48.542 bw ( KiB/s): min= 912, max= 2400, per=3.28%, avg=1428.80, stdev=369.26, samples=20 00:52:48.542 iops : min= 228, max= 600, avg=357.20, stdev=92.32, samples=20 00:52:48.542 lat (msec) : 20=4.54%, 50=65.55%, 100=29.63%, 250=0.28% 00:52:48.542 cpu : usr=33.24%, sys=2.21%, ctx=1061, majf=0, minf=9 00:52:48.542 IO depths : 1=0.2%, 2=1.2%, 4=4.6%, 8=77.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:52:48.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 complete : 0=0.0%, 4=89.1%, 8=9.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 issued rwts: total=3588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.542 filename2: (groupid=0, jobs=1): err= 0: pid=83056: Mon Dec 9 05:43:35 2024 00:52:48.542 read: IOPS=356, BW=1426KiB/s (1460kB/s)(14.0MiB/10048msec) 00:52:48.542 slat (usec): min=6, max=8026, avg=23.40, stdev=275.85 00:52:48.542 clat (msec): min=9, max=120, avg=44.74, stdev=16.10 00:52:48.542 lat (msec): min=9, max=120, avg=44.76, stdev=16.11 00:52:48.542 clat percentiles (msec): 00:52:48.542 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 25], 20.00th=[ 34], 00:52:48.542 | 30.00th=[ 36], 40.00th=[ 38], 50.00th=[ 45], 60.00th=[ 48], 00:52:48.542 | 70.00th=[ 50], 80.00th=[ 59], 90.00th=[ 63], 95.00th=[ 73], 00:52:48.542 | 99.00th=[ 94], 99.50th=[ 99], 99.90th=[ 107], 99.95th=[ 109], 00:52:48.542 | 99.99th=[ 121] 00:52:48.542 bw ( KiB/s): min= 864, max= 2528, per=3.28%, avg=1425.10, stdev=386.60, samples=20 00:52:48.542 iops : min= 216, max= 632, avg=356.25, stdev=96.65, samples=20 00:52:48.542 lat (msec) : 10=0.17%, 20=2.99%, 50=68.25%, 100=28.20%, 250=0.39% 00:52:48.542 cpu : usr=30.93%, sys=2.27%, ctx=980, majf=0, minf=9 00:52:48.542 IO depths : 1=0.2%, 2=1.7%, 4=6.6%, 8=75.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:52:48.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 complete : 0=0.0%, 4=89.6%, 8=9.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 issued rwts: total=3581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.542 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:52:48.542 filename2: (groupid=0, jobs=1): err= 0: pid=83057: Mon Dec 9 05:43:35 2024 00:52:48.542 read: IOPS=550, BW=2202KiB/s (2255kB/s)(21.5MiB/10013msec) 00:52:48.542 slat (usec): min=4, max=4031, avg=17.54, stdev=76.60 00:52:48.542 clat (msec): min=8, max=135, avg=28.98, stdev=18.89 00:52:48.542 lat (msec): min=8, max=135, avg=28.99, stdev=18.89 00:52:48.542 clat percentiles (msec): 00:52:48.542 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 16], 00:52:48.542 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 24], 60.00th=[ 29], 00:52:48.542 | 70.00th=[ 37], 80.00th=[ 39], 90.00th=[ 43], 95.00th=[ 73], 00:52:48.542 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 130], 00:52:48.542 | 99.99th=[ 136] 00:52:48.542 bw ( KiB/s): min= 640, max= 4168, per=4.97%, avg=2163.58, stdev=1068.86, samples=19 00:52:48.542 iops : min= 160, max= 1042, avg=540.89, stdev=267.21, samples=19 00:52:48.542 lat (msec) : 10=0.98%, 20=43.46%, 50=47.98%, 100=6.35%, 250=1.23% 00:52:48.542 cpu : usr=73.91%, sys=4.52%, ctx=737, majf=0, minf=9 00:52:48.542 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:52:48.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 complete : 0=0.0%, 4=88.6%, 8=10.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 issued rwts: total=5513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.542 filename2: (groupid=0, jobs=1): err= 0: pid=83058: Mon Dec 9 05:43:35 2024 00:52:48.542 read: IOPS=354, BW=1417KiB/s (1451kB/s)(13.9MiB/10049msec) 00:52:48.542 slat (usec): min=5, max=7030, avg=18.69, stdev=169.59 00:52:48.542 clat (msec): min=9, max=105, avg=45.01, stdev=15.90 00:52:48.542 lat (msec): min=9, max=105, avg=45.02, stdev=15.90 00:52:48.542 clat percentiles (msec): 00:52:48.542 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 25], 20.00th=[ 33], 00:52:48.542 | 30.00th=[ 36], 40.00th=[ 40], 50.00th=[ 44], 60.00th=[ 48], 00:52:48.542 | 70.00th=[ 52], 80.00th=[ 57], 90.00th=[ 67], 95.00th=[ 75], 00:52:48.542 | 99.00th=[ 93], 99.50th=[ 100], 99.90th=[ 106], 99.95th=[ 106], 00:52:48.542 | 99.99th=[ 106] 00:52:48.542 bw ( KiB/s): min= 920, max= 2464, per=3.26%, avg=1417.05, stdev=370.79, samples=20 00:52:48.542 iops : min= 230, max= 616, avg=354.25, stdev=92.70, samples=20 00:52:48.542 lat (msec) : 10=0.06%, 20=2.36%, 50=65.63%, 100=31.48%, 250=0.48% 00:52:48.542 cpu : usr=41.81%, sys=2.62%, ctx=1393, majf=0, minf=9 00:52:48.542 IO depths : 1=0.3%, 2=2.0%, 4=7.5%, 8=74.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:52:48.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 complete : 0=0.0%, 4=89.7%, 8=8.6%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 issued rwts: total=3561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.542 filename2: (groupid=0, jobs=1): err= 0: pid=83059: Mon Dec 9 05:43:35 2024 00:52:48.542 read: IOPS=530, BW=2121KiB/s (2172kB/s)(20.7MiB/10010msec) 00:52:48.542 slat (usec): min=4, max=4027, avg=16.15, stdev=96.47 00:52:48.542 clat (msec): min=8, max=147, avg=30.10, stdev=18.91 00:52:48.542 lat (msec): min=8, max=147, avg=30.11, stdev=18.91 00:52:48.542 clat percentiles (msec): 00:52:48.542 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 17], 00:52:48.542 | 30.00th=[ 17], 40.00th=[ 21], 50.00th=[ 24], 60.00th=[ 34], 00:52:48.542 | 70.00th=[ 38], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 
72], 00:52:48.542 | 99.00th=[ 101], 99.50th=[ 126], 99.90th=[ 134], 99.95th=[ 134], 00:52:48.542 | 99.99th=[ 148] 00:52:48.542 bw ( KiB/s): min= 640, max= 4248, per=4.72%, avg=2052.47, stdev=989.26, samples=19 00:52:48.542 iops : min= 160, max= 1062, avg=513.11, stdev=247.32, samples=19 00:52:48.542 lat (msec) : 10=1.56%, 20=38.09%, 50=53.02%, 100=6.37%, 250=0.96% 00:52:48.542 cpu : usr=48.37%, sys=3.32%, ctx=1201, majf=0, minf=9 00:52:48.542 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.3%, 16=17.2%, 32=0.0%, >=64=0.0% 00:52:48.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 complete : 0=0.0%, 4=88.3%, 8=11.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.542 issued rwts: total=5309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.543 filename2: (groupid=0, jobs=1): err= 0: pid=83060: Mon Dec 9 05:43:35 2024 00:52:48.543 read: IOPS=370, BW=1482KiB/s (1517kB/s)(14.5MiB/10039msec) 00:52:48.543 slat (usec): min=5, max=10017, avg=24.15, stdev=287.93 00:52:48.543 clat (msec): min=11, max=119, avg=43.05, stdev=15.95 00:52:48.543 lat (msec): min=11, max=119, avg=43.08, stdev=15.95 00:52:48.543 clat percentiles (msec): 00:52:48.543 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 33], 00:52:48.543 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 40], 60.00th=[ 47], 00:52:48.543 | 70.00th=[ 48], 80.00th=[ 57], 90.00th=[ 62], 95.00th=[ 72], 00:52:48.543 | 99.00th=[ 92], 99.50th=[ 99], 99.90th=[ 107], 99.95th=[ 107], 00:52:48.543 | 99.99th=[ 121] 00:52:48.543 bw ( KiB/s): min= 920, max= 2496, per=3.41%, avg=1481.20, stdev=385.26, samples=20 00:52:48.543 iops : min= 230, max= 624, avg=370.30, stdev=96.32, samples=20 00:52:48.543 lat (msec) : 20=2.61%, 50=71.82%, 100=25.22%, 250=0.35% 00:52:48.543 cpu : usr=31.16%, sys=2.06%, ctx=908, majf=0, minf=9 00:52:48.543 IO depths : 1=0.2%, 2=0.9%, 4=3.5%, 8=79.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:52:48.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.543 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.543 issued rwts: total=3719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.543 filename2: (groupid=0, jobs=1): err= 0: pid=83061: Mon Dec 9 05:43:35 2024 00:52:48.543 read: IOPS=356, BW=1428KiB/s (1462kB/s)(14.0MiB/10044msec) 00:52:48.543 slat (usec): min=5, max=8039, avg=25.12, stdev=265.40 00:52:48.543 clat (msec): min=5, max=107, avg=44.63, stdev=16.08 00:52:48.543 lat (msec): min=5, max=107, avg=44.66, stdev=16.08 00:52:48.543 clat percentiles (msec): 00:52:48.543 | 1.00th=[ 13], 5.00th=[ 22], 10.00th=[ 25], 20.00th=[ 33], 00:52:48.543 | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 44], 60.00th=[ 47], 00:52:48.543 | 70.00th=[ 52], 80.00th=[ 59], 90.00th=[ 66], 95.00th=[ 73], 00:52:48.543 | 99.00th=[ 89], 99.50th=[ 97], 99.90th=[ 108], 99.95th=[ 108], 00:52:48.543 | 99.99th=[ 108] 00:52:48.543 bw ( KiB/s): min= 944, max= 2704, per=3.28%, avg=1427.30, stdev=411.65, samples=20 00:52:48.543 iops : min= 236, max= 676, avg=356.80, stdev=102.91, samples=20 00:52:48.543 lat (msec) : 10=0.06%, 20=4.74%, 50=63.74%, 100=31.19%, 250=0.28% 00:52:48.543 cpu : usr=41.63%, sys=2.87%, ctx=1327, majf=0, minf=9 00:52:48.543 IO depths : 1=0.2%, 2=2.4%, 4=9.5%, 8=72.9%, 16=15.0%, 32=0.0%, >=64=0.0% 00:52:48.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.543 complete : 0=0.0%, 4=90.1%, 8=7.8%, 16=2.1%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:52:48.543 issued rwts: total=3585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:48.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:52:48.543 00:52:48.543 Run status group 0 (all jobs): 00:52:48.543 READ: bw=42.5MiB/s (44.5MB/s), 1411KiB/s-2339KiB/s (1445kB/s-2395kB/s), io=427MiB (448MB), run=10001-10059msec 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.801 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 bdev_null0 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 [2024-12-09 05:43:35.682874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 bdev_null1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:52:48.802 { 00:52:48.802 "params": { 00:52:48.802 "name": "Nvme$subsystem", 00:52:48.802 "trtype": "$TEST_TRANSPORT", 00:52:48.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:52:48.802 "adrfam": "ipv4", 00:52:48.802 "trsvcid": "$NVMF_PORT", 00:52:48.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:52:48.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:52:48.802 "hdgst": ${hdgst:-false}, 00:52:48.802 "ddgst": ${ddgst:-false} 00:52:48.802 }, 00:52:48.802 "method": "bdev_nvme_attach_controller" 00:52:48.802 } 00:52:48.802 EOF 00:52:48.802 )") 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:52:48.802 
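Note: rpc_cmd above is the autotest wrapper around SPDK's JSON-RPC client; a minimal standalone sketch of the same target-side setup, assuming an nvmf_tgt is already running with a TCP transport created and scripts/rpc.py reachable on its default socket (paths are illustrative), would be:

#!/usr/bin/env bash
# Sketch only: recreate the two DIF-type-1 null bdevs and expose each one
# through its own NVMe-oF/TCP subsystem, mirroring create_subsystems 0 1 above.
RPC=scripts/rpc.py        # assumed path; run from the SPDK checkout in use
for sub in 0 1; do
  # 64 MB null bdev, 512-byte blocks with 16-byte metadata, DIF type 1
  $RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
  $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
       --serial-number "53313233-$sub" --allow-any-host
  $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
       -t tcp -a 10.0.0.3 -s 4420
done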
05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:52:48.802 { 00:52:48.802 "params": { 00:52:48.802 "name": "Nvme$subsystem", 00:52:48.802 "trtype": "$TEST_TRANSPORT", 00:52:48.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:52:48.802 "adrfam": "ipv4", 00:52:48.802 "trsvcid": "$NVMF_PORT", 00:52:48.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:52:48.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:52:48.802 "hdgst": ${hdgst:-false}, 00:52:48.802 "ddgst": ${ddgst:-false} 00:52:48.802 }, 00:52:48.802 "method": "bdev_nvme_attach_controller" 00:52:48.802 } 00:52:48.802 EOF 00:52:48.802 )") 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
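Note: the fio_plugin helper above LD_PRELOADs the spdk_bdev engine into stock fio and hands it the generated JSON and job file through /dev/fd substitutions; an equivalent standalone invocation with ordinary files (file paths are illustrative assumptions, the binaries and flags are the ones shown in the trace) is sketched below.

#!/usr/bin/env bash
# Sketch only: run fio against SPDK bdevs through the external spdk_bdev engine.
SPDK=/home/vagrant/spdk_repo/spdk      # repo path used by this CI run
JSON=/tmp/nvme_attach.json             # output of gen_nvmf_target_json (assumed path)
JOB=/tmp/dif_rand_params.fio           # output of gen_fio_conf (assumed path)
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf "$JSON" "$JOB"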
00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:52:48.802 05:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:52:48.802 "params": { 00:52:48.802 "name": "Nvme0", 00:52:48.802 "trtype": "tcp", 00:52:48.802 "traddr": "10.0.0.3", 00:52:48.802 "adrfam": "ipv4", 00:52:48.802 "trsvcid": "4420", 00:52:48.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:52:48.802 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:52:48.802 "hdgst": false, 00:52:48.802 "ddgst": false 00:52:48.802 }, 00:52:48.802 "method": "bdev_nvme_attach_controller" 00:52:48.802 },{ 00:52:48.802 "params": { 00:52:48.802 "name": "Nvme1", 00:52:48.802 "trtype": "tcp", 00:52:48.802 "traddr": "10.0.0.3", 00:52:48.802 "adrfam": "ipv4", 00:52:48.802 "trsvcid": "4420", 00:52:48.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:52:48.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:52:48.803 "hdgst": false, 00:52:48.803 "ddgst": false 00:52:48.803 }, 00:52:48.803 "method": "bdev_nvme_attach_controller" 00:52:48.803 }' 00:52:48.803 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:48.803 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:48.803 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:48.803 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:48.803 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:52:48.803 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:49.061 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:49.061 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:49.061 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:52:49.061 05:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:49.061 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:52:49.061 ... 00:52:49.061 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:52:49.061 ... 
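Note: the job file itself is generated on the fly and only surfaces as the two per-file headers above; a plausible reconstruction from the logged settings (bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5, two files, rw=randread) follows. The filename= values and time_based are assumptions, not taken from the trace.

#!/usr/bin/env bash
# Sketch only: plausible job file behind the filename0/filename1 headers above.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF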
00:52:49.061 fio-3.35 00:52:49.061 Starting 4 threads 00:52:55.617 00:52:55.617 filename0: (groupid=0, jobs=1): err= 0: pid=83275: Mon Dec 9 05:43:41 2024 00:52:55.617 read: IOPS=1852, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5001msec) 00:52:55.617 slat (nsec): min=3390, max=57540, avg=14147.78, stdev=4873.26 00:52:55.617 clat (usec): min=937, max=6026, avg=4263.23, stdev=342.25 00:52:55.617 lat (usec): min=945, max=6044, avg=4277.37, stdev=342.34 00:52:55.617 clat percentiles (usec): 00:52:55.617 | 1.00th=[ 3097], 5.00th=[ 3818], 10.00th=[ 3949], 20.00th=[ 4113], 00:52:55.617 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:52:55.617 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4686], 00:52:55.617 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5080], 99.95th=[ 5145], 00:52:55.617 | 99.99th=[ 5997] 00:52:55.617 bw ( KiB/s): min=14208, max=15792, per=21.53%, avg=14732.44, stdev=557.18, samples=9 00:52:55.617 iops : min= 1776, max= 1974, avg=1841.56, stdev=69.65, samples=9 00:52:55.617 lat (usec) : 1000=0.03% 00:52:55.617 lat (msec) : 2=0.29%, 4=13.16%, 10=86.52% 00:52:55.617 cpu : usr=91.76%, sys=7.46%, ctx=10, majf=0, minf=9 00:52:55.617 IO depths : 1=0.1%, 2=23.7%, 4=51.1%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:52:55.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:55.617 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:55.617 issued rwts: total=9263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:55.617 latency : target=0, window=0, percentile=100.00%, depth=8 00:52:55.617 filename0: (groupid=0, jobs=1): err= 0: pid=83276: Mon Dec 9 05:43:41 2024 00:52:55.617 read: IOPS=2355, BW=18.4MiB/s (19.3MB/s)(92.0MiB/5001msec) 00:52:55.617 slat (nsec): min=6974, max=63170, avg=14373.27, stdev=4756.40 00:52:55.618 clat (usec): min=994, max=5189, avg=3357.85, stdev=930.33 00:52:55.618 lat (usec): min=1004, max=5204, avg=3372.22, stdev=929.84 00:52:55.618 clat percentiles (usec): 00:52:55.618 | 1.00th=[ 1975], 5.00th=[ 2057], 10.00th=[ 2114], 20.00th=[ 2245], 00:52:55.618 | 30.00th=[ 2442], 40.00th=[ 2868], 50.00th=[ 3884], 60.00th=[ 4015], 00:52:55.618 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4424], 00:52:55.618 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5014], 99.95th=[ 5080], 00:52:55.618 | 99.99th=[ 5145] 00:52:55.618 bw ( KiB/s): min=15488, max=20000, per=28.05%, avg=19199.00, stdev=1440.84, samples=9 00:52:55.618 iops : min= 1936, max= 2500, avg=2399.78, stdev=180.06, samples=9 00:52:55.618 lat (usec) : 1000=0.01% 00:52:55.618 lat (msec) : 2=1.54%, 4=57.19%, 10=41.26% 00:52:55.618 cpu : usr=91.76%, sys=7.26%, ctx=42, majf=0, minf=0 00:52:55.618 IO depths : 1=0.1%, 2=4.3%, 4=61.5%, 8=34.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:52:55.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:55.618 complete : 0=0.0%, 4=98.4%, 8=1.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:55.618 issued rwts: total=11781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:55.618 latency : target=0, window=0, percentile=100.00%, depth=8 00:52:55.618 filename1: (groupid=0, jobs=1): err= 0: pid=83277: Mon Dec 9 05:43:41 2024 00:52:55.618 read: IOPS=1996, BW=15.6MiB/s (16.4MB/s)(78.1MiB/5004msec) 00:52:55.618 slat (nsec): min=4683, max=60102, avg=12616.35, stdev=4753.05 00:52:55.618 clat (usec): min=589, max=7557, avg=3960.96, stdev=880.51 00:52:55.618 lat (usec): min=597, max=7572, avg=3973.57, stdev=881.89 00:52:55.618 clat percentiles (usec): 00:52:55.618 | 1.00th=[ 1287], 5.00th=[ 1401], 
10.00th=[ 2835], 20.00th=[ 3720], 00:52:55.618 | 30.00th=[ 4113], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:52:55.618 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4686], 00:52:55.618 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5997], 99.95th=[ 7504], 00:52:55.618 | 99.99th=[ 7570] 00:52:55.618 bw ( KiB/s): min=14208, max=21344, per=22.46%, avg=15371.56, stdev=2293.32, samples=9 00:52:55.618 iops : min= 1776, max= 2668, avg=1921.44, stdev=286.66, samples=9 00:52:55.618 lat (usec) : 750=0.09%, 1000=0.11% 00:52:55.618 lat (msec) : 2=7.50%, 4=20.17%, 10=72.14% 00:52:55.618 cpu : usr=91.49%, sys=7.70%, ctx=8, majf=0, minf=0 00:52:55.618 IO depths : 1=0.1%, 2=17.2%, 4=54.7%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:52:55.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:55.618 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:55.618 issued rwts: total=9992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:55.618 latency : target=0, window=0, percentile=100.00%, depth=8 00:52:55.618 filename1: (groupid=0, jobs=1): err= 0: pid=83278: Mon Dec 9 05:43:41 2024 00:52:55.618 read: IOPS=2354, BW=18.4MiB/s (19.3MB/s)(92.0MiB/5001msec) 00:52:55.618 slat (nsec): min=7193, max=54189, avg=13791.54, stdev=4399.25 00:52:55.618 clat (usec): min=1119, max=5188, avg=3362.46, stdev=928.81 00:52:55.618 lat (usec): min=1135, max=5203, avg=3376.25, stdev=929.37 00:52:55.618 clat percentiles (usec): 00:52:55.618 | 1.00th=[ 1975], 5.00th=[ 2057], 10.00th=[ 2114], 20.00th=[ 2245], 00:52:55.618 | 30.00th=[ 2474], 40.00th=[ 2900], 50.00th=[ 3884], 60.00th=[ 4015], 00:52:55.618 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4424], 00:52:55.618 | 99.00th=[ 4817], 99.50th=[ 4883], 99.90th=[ 5080], 99.95th=[ 5145], 00:52:55.618 | 99.99th=[ 5145] 00:52:55.618 bw ( KiB/s): min=15488, max=20000, per=28.05%, avg=19199.00, stdev=1440.84, samples=9 00:52:55.618 iops : min= 1936, max= 2500, avg=2399.78, stdev=180.06, samples=9 00:52:55.618 lat (msec) : 2=1.58%, 4=56.95%, 10=41.47% 00:52:55.618 cpu : usr=91.34%, sys=7.70%, ctx=8, majf=0, minf=0 00:52:55.618 IO depths : 1=0.1%, 2=4.3%, 4=61.5%, 8=34.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:52:55.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:55.618 complete : 0=0.0%, 4=98.4%, 8=1.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:55.618 issued rwts: total=11773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:55.618 latency : target=0, window=0, percentile=100.00%, depth=8 00:52:55.618 00:52:55.618 Run status group 0 (all jobs): 00:52:55.618 READ: bw=66.8MiB/s (70.1MB/s), 14.5MiB/s-18.4MiB/s (15.2MB/s-19.3MB/s), io=334MiB (351MB), run=5001-5004msec 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:55.618 05:43:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:55.618 ************************************ 00:52:55.618 END TEST fio_dif_rand_params 00:52:55.618 ************************************ 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.618 00:52:55.618 real 0m31.263s 00:52:55.618 user 3m13.263s 00:52:55.618 sys 0m10.855s 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:55.618 05:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:52:55.618 05:43:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:52:55.618 05:43:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:55.618 05:43:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:55.618 05:43:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:52:55.618 ************************************ 00:52:55.618 START TEST fio_dif_digest 00:52:55.618 ************************************ 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:52:55.618 05:43:41 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:52:55.618 bdev_null0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:52:55.618 [2024-12-09 05:43:41.807837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:52:55.618 05:43:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:52:55.619 { 00:52:55.619 "params": { 00:52:55.619 "name": "Nvme$subsystem", 00:52:55.619 "trtype": "$TEST_TRANSPORT", 00:52:55.619 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:52:55.619 "adrfam": "ipv4", 00:52:55.619 "trsvcid": "$NVMF_PORT", 00:52:55.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:52:55.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:52:55.619 "hdgst": ${hdgst:-false}, 00:52:55.619 "ddgst": ${ddgst:-false} 00:52:55.619 }, 00:52:55.619 "method": "bdev_nvme_attach_controller" 00:52:55.619 } 00:52:55.619 EOF 00:52:55.619 )") 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:52:55.619 "params": { 00:52:55.619 "name": "Nvme0", 00:52:55.619 "trtype": "tcp", 00:52:55.619 "traddr": "10.0.0.3", 00:52:55.619 "adrfam": "ipv4", 00:52:55.619 "trsvcid": "4420", 00:52:55.619 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:52:55.619 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:52:55.619 "hdgst": true, 00:52:55.619 "ddgst": true 00:52:55.619 }, 00:52:55.619 "method": "bdev_nvme_attach_controller" 00:52:55.619 }' 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:52:55.619 05:43:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:52:55.619 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:52:55.619 ... 
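Note: the digest job file is likewise generated on the fly; a rough reconstruction from the logged settings (bs=128k,128k,128k, iodepth=3, numjobs=3, runtime=10, one file over the DIF-type-3 null bdev) is sketched below, with the filename= value and time_based again assumed rather than taken from the trace.

#!/usr/bin/env bash
# Sketch only: plausible job file for the digest run (3 threads, 128 KiB
# random reads at queue depth 3 for ~10 s over a single attached bdev).
cat > /tmp/dif_digest.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k,128k,128k
iodepth=3
numjobs=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
EOF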
00:52:55.619 fio-3.35 00:52:55.619 Starting 3 threads 00:53:05.598 00:53:05.598 filename0: (groupid=0, jobs=1): err= 0: pid=83384: Mon Dec 9 05:43:52 2024 00:53:05.598 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(302MiB/10001msec) 00:53:05.598 slat (nsec): min=6898, max=38075, avg=9690.69, stdev=3811.92 00:53:05.598 clat (usec): min=6157, max=14642, avg=12384.70, stdev=478.07 00:53:05.598 lat (usec): min=6165, max=14657, avg=12394.39, stdev=478.23 00:53:05.598 clat percentiles (usec): 00:53:05.598 | 1.00th=[11994], 5.00th=[11994], 10.00th=[12125], 20.00th=[12125], 00:53:05.598 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12256], 60.00th=[12256], 00:53:05.598 | 70.00th=[12387], 80.00th=[12518], 90.00th=[13042], 95.00th=[13304], 00:53:05.598 | 99.00th=[13960], 99.50th=[14222], 99.90th=[14615], 99.95th=[14615], 00:53:05.598 | 99.99th=[14615] 00:53:05.598 bw ( KiB/s): min=29952, max=31488, per=33.36%, avg=30959.26, stdev=517.17, samples=19 00:53:05.598 iops : min= 234, max= 246, avg=241.84, stdev= 4.06, samples=19 00:53:05.598 lat (msec) : 10=0.12%, 20=99.88% 00:53:05.598 cpu : usr=91.49%, sys=7.97%, ctx=19, majf=0, minf=0 00:53:05.598 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:53:05.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:53:05.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:53:05.598 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:53:05.598 latency : target=0, window=0, percentile=100.00%, depth=3 00:53:05.598 filename0: (groupid=0, jobs=1): err= 0: pid=83385: Mon Dec 9 05:43:52 2024 00:53:05.598 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(302MiB/10004msec) 00:53:05.598 slat (nsec): min=4794, max=45582, avg=13428.38, stdev=3764.98 00:53:05.598 clat (usec): min=8674, max=14486, avg=12382.01, stdev=452.86 00:53:05.598 lat (usec): min=8686, max=14502, avg=12395.44, stdev=453.01 00:53:05.598 clat percentiles (usec): 00:53:05.598 | 1.00th=[11994], 5.00th=[11994], 10.00th=[12125], 20.00th=[12125], 00:53:05.598 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12256], 60.00th=[12256], 00:53:05.598 | 70.00th=[12387], 80.00th=[12518], 90.00th=[13042], 95.00th=[13304], 00:53:05.598 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14484], 99.95th=[14484], 00:53:05.598 | 99.99th=[14484] 00:53:05.598 bw ( KiB/s): min=30658, max=31488, per=33.31%, avg=30918.84, stdev=349.74, samples=19 00:53:05.598 iops : min= 239, max= 246, avg=241.53, stdev= 2.76, samples=19 00:53:05.598 lat (msec) : 10=0.12%, 20=99.88% 00:53:05.598 cpu : usr=91.59%, sys=7.90%, ctx=12, majf=0, minf=0 00:53:05.598 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:53:05.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:53:05.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:53:05.598 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:53:05.599 latency : target=0, window=0, percentile=100.00%, depth=3 00:53:05.599 filename0: (groupid=0, jobs=1): err= 0: pid=83386: Mon Dec 9 05:43:52 2024 00:53:05.599 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(302MiB/10004msec) 00:53:05.599 slat (nsec): min=4907, max=54307, avg=13673.60, stdev=4178.73 00:53:05.599 clat (usec): min=8637, max=14480, avg=12380.83, stdev=451.02 00:53:05.599 lat (usec): min=8649, max=14494, avg=12394.50, stdev=451.17 00:53:05.599 clat percentiles (usec): 00:53:05.599 | 1.00th=[11994], 5.00th=[11994], 10.00th=[12125], 20.00th=[12125], 00:53:05.599 | 30.00th=[12125], 40.00th=[12256], 
50.00th=[12256], 60.00th=[12256], 00:53:05.599 | 70.00th=[12387], 80.00th=[12518], 90.00th=[13042], 95.00th=[13304], 00:53:05.599 | 99.00th=[13960], 99.50th=[14222], 99.90th=[14484], 99.95th=[14484], 00:53:05.599 | 99.99th=[14484] 00:53:05.599 bw ( KiB/s): min=30658, max=31488, per=33.31%, avg=30918.84, stdev=349.74, samples=19 00:53:05.599 iops : min= 239, max= 246, avg=241.53, stdev= 2.76, samples=19 00:53:05.599 lat (msec) : 10=0.12%, 20=99.88% 00:53:05.599 cpu : usr=91.94%, sys=7.56%, ctx=17, majf=0, minf=0 00:53:05.599 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:53:05.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:53:05.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:53:05.599 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:53:05.599 latency : target=0, window=0, percentile=100.00%, depth=3 00:53:05.599 00:53:05.599 Run status group 0 (all jobs): 00:53:05.599 READ: bw=90.6MiB/s (95.0MB/s), 30.2MiB/s-30.2MiB/s (31.7MB/s-31.7MB/s), io=907MiB (951MB), run=10001-10004msec 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:53:05.858 ************************************ 00:53:05.858 END TEST fio_dif_digest 00:53:05.858 ************************************ 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:05.858 00:53:05.858 real 0m10.983s 00:53:05.858 user 0m28.208s 00:53:05.858 sys 0m2.559s 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:05.858 05:43:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:53:05.858 05:43:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:53:05.858 05:43:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:53:05.858 05:43:52 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:05.858 05:43:52 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:53:06.116 05:43:52 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:06.116 05:43:52 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:53:06.116 05:43:52 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:06.116 05:43:52 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:06.116 rmmod nvme_tcp 00:53:06.116 rmmod nvme_fabrics 00:53:06.116 rmmod nvme_keyring 00:53:06.116 05:43:52 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:06.116 05:43:52 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:53:06.116 05:43:52 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:53:06.116 05:43:52 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82566 ']' 00:53:06.116 05:43:52 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82566 00:53:06.116 05:43:52 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82566 ']' 00:53:06.116 05:43:52 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82566 00:53:06.116 05:43:52 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:53:06.116 05:43:52 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:06.116 05:43:52 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82566 00:53:06.116 killing process with pid 82566 00:53:06.116 05:43:52 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:06.116 05:43:52 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:06.116 05:43:52 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82566' 00:53:06.116 05:43:52 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82566 00:53:06.116 05:43:52 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82566 00:53:06.374 05:43:53 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:53:06.374 05:43:53 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:53:06.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:53:06.631 Waiting for block devices as requested 00:53:06.631 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:53:06.631 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:06.888 05:43:53 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:07.146 05:43:53 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:07.146 05:43:53 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:07.146 05:43:53 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:07.146 05:43:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:53:07.146 05:43:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:07.146 05:43:53 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:53:07.146 00:53:07.146 real 1m7.119s 00:53:07.146 user 5m2.063s 00:53:07.146 sys 0m21.937s 00:53:07.146 05:43:53 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:07.146 05:43:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:53:07.146 ************************************ 00:53:07.146 END TEST nvmf_dif 00:53:07.146 ************************************ 00:53:07.146 05:43:53 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:53:07.146 05:43:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:53:07.146 05:43:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:07.146 05:43:53 -- common/autotest_common.sh@10 -- # set +x 00:53:07.146 ************************************ 00:53:07.146 START TEST nvmf_abort_qd_sizes 00:53:07.146 ************************************ 00:53:07.146 05:43:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:53:07.146 * Looking for test storage... 00:53:07.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:07.146 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:53:07.146 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:53:07.146 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:07.405 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:53:07.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.406 --rc genhtml_branch_coverage=1 00:53:07.406 --rc genhtml_function_coverage=1 00:53:07.406 --rc genhtml_legend=1 00:53:07.406 --rc geninfo_all_blocks=1 00:53:07.406 --rc geninfo_unexecuted_blocks=1 00:53:07.406 00:53:07.406 ' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:53:07.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.406 --rc genhtml_branch_coverage=1 00:53:07.406 --rc genhtml_function_coverage=1 00:53:07.406 --rc genhtml_legend=1 00:53:07.406 --rc geninfo_all_blocks=1 00:53:07.406 --rc geninfo_unexecuted_blocks=1 00:53:07.406 00:53:07.406 ' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:53:07.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.406 --rc genhtml_branch_coverage=1 00:53:07.406 --rc genhtml_function_coverage=1 00:53:07.406 --rc genhtml_legend=1 00:53:07.406 --rc geninfo_all_blocks=1 00:53:07.406 --rc geninfo_unexecuted_blocks=1 00:53:07.406 00:53:07.406 ' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:53:07.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.406 --rc genhtml_branch_coverage=1 00:53:07.406 --rc genhtml_function_coverage=1 00:53:07.406 --rc genhtml_legend=1 00:53:07.406 --rc geninfo_all_blocks=1 00:53:07.406 --rc geninfo_unexecuted_blocks=1 00:53:07.406 00:53:07.406 ' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:07.406 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:07.406 Cannot find device "nvmf_init_br" 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:07.406 Cannot find device "nvmf_init_br2" 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:07.406 Cannot find device "nvmf_tgt_br" 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:07.406 Cannot find device "nvmf_tgt_br2" 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:07.406 Cannot find device "nvmf_init_br" 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:07.406 Cannot find device "nvmf_init_br2" 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:07.406 Cannot find device "nvmf_tgt_br" 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:07.406 Cannot find device "nvmf_tgt_br2" 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:07.406 Cannot find device "nvmf_br" 00:53:07.406 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:53:07.407 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:07.407 Cannot find device "nvmf_init_if" 00:53:07.407 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:53:07.407 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:07.407 Cannot find device "nvmf_init_if2" 00:53:07.407 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:53:07.407 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:07.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:07.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:07.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:07.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:53:07.666 00:53:07.666 --- 10.0.0.3 ping statistics --- 00:53:07.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:07.666 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:07.666 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:07.666 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:53:07.666 00:53:07.666 --- 10.0.0.4 ping statistics --- 00:53:07.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:07.666 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:07.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:07.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:53:07.666 00:53:07.666 --- 10.0.0.1 ping statistics --- 00:53:07.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:07.666 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:07.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:53:07.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:53:07.666 00:53:07.666 --- 10.0.0.2 ping statistics --- 00:53:07.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:07.666 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:53:07.666 05:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:53:08.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:53:08.604 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:53:08.604 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84034 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84034 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84034 ']' 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:08.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:08.604 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:53:08.862 [2024-12-09 05:43:55.577782] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
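(Illustrative sketch, not part of the captured output: the nvmfappstart trace above amounts to launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and waiting for its RPC socket before any rpc_cmd is issued. The plain socket check and the 30-second timeout below are assumptions for the example; the real waitforlisten helper in autotest_common.sh does more.)

  # Launch the SPDK NVMe-oF target in the test namespace, as traced above.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # Poll for the RPC socket (simplified stand-in for waitforlisten).
  for _ in $(seq 1 30); do
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 1
  done
  kill -0 "$nvmfpid"   # fail fast if the target already exited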
00:53:08.862 [2024-12-09 05:43:55.577867] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:08.862 [2024-12-09 05:43:55.736141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:53:08.862 [2024-12-09 05:43:55.777628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:08.862 [2024-12-09 05:43:55.777689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:08.862 [2024-12-09 05:43:55.777703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:08.862 [2024-12-09 05:43:55.777713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:08.862 [2024-12-09 05:43:55.777722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:08.862 [2024-12-09 05:43:55.778708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:08.862 [2024-12-09 05:43:55.778849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:08.862 [2024-12-09 05:43:55.778972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:08.862 [2024-12-09 05:43:55.778974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:08.862 [2024-12-09 05:43:55.816446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:53:09.120 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:09.120 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:53:09.120 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:09.120 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:09.120 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:53:09.120 05:43:55 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:09.120 05:43:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:53:09.120 05:43:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:53:09.120 05:43:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:53:09.120 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:53:09.121 05:43:55 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
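(Illustrative sketch, not part of the captured output: the nvme_in_userspace traces above select controllers by PCI class 01 / subclass 08 / prog-if 02. A minimal shell version of the same filter, assuming lspci is available; the real helper additionally skips controllers the kernel nvme driver is actively using, which is omitted here.)

  # Enumerate NVMe controllers by PCI class code, mirroring the traced pipeline.
  mapfile -t bdfs < <(lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')
  printf '%s\n' "${bdfs[@]}"   # this run found 0000:00:10.0 and 0000:00:11.0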
00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:09.121 05:43:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:53:09.121 ************************************ 00:53:09.121 START TEST spdk_target_abort 00:53:09.121 ************************************ 00:53:09.121 05:43:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:53:09.121 05:43:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:53:09.121 05:43:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:53:09.121 05:43:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.121 05:43:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.121 spdk_targetn1 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.121 [2024-12-09 05:43:56.029886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.121 [2024-12-09 05:43:56.069036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:09.121 05:43:56 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:53:09.121 05:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:53:13.306 Initializing NVMe Controllers 00:53:13.306 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:53:13.306 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:53:13.306 Initialization complete. Launching workers. 
00:53:13.306 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10099, failed: 0 00:53:13.306 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1022, failed to submit 9077 00:53:13.306 success 742, unsuccessful 280, failed 0 00:53:13.306 05:43:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:53:13.306 05:43:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:53:16.590 Initializing NVMe Controllers 00:53:16.590 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:53:16.590 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:53:16.590 Initialization complete. Launching workers. 00:53:16.590 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8967, failed: 0 00:53:16.590 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1178, failed to submit 7789 00:53:16.590 success 378, unsuccessful 800, failed 0 00:53:16.590 05:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:53:16.590 05:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:53:19.923 Initializing NVMe Controllers 00:53:19.923 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:53:19.923 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:53:19.923 Initialization complete. Launching workers. 
00:53:19.923 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31181, failed: 0 00:53:19.923 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2267, failed to submit 28914 00:53:19.923 success 416, unsuccessful 1851, failed 0 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84034 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84034 ']' 00:53:19.923 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84034 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84034 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:19.924 killing process with pid 84034 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84034' 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84034 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84034 00:53:19.924 00:53:19.924 real 0m10.882s 00:53:19.924 user 0m41.237s 00:53:19.924 sys 0m2.224s 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:53:19.924 ************************************ 00:53:19.924 END TEST spdk_target_abort 00:53:19.924 ************************************ 00:53:19.924 05:44:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:53:19.924 05:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:53:19.924 05:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:19.924 05:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:53:19.924 ************************************ 00:53:19.924 START TEST kernel_target_abort 00:53:19.924 
************************************ 00:53:19.924 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:53:20.183 05:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:53:20.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:53:20.443 Waiting for block devices as requested 00:53:20.443 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:53:20.702 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:53:20.702 No valid GPT data, bailing 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:53:20.702 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:53:20.703 No valid GPT data, bailing 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:53:20.703 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:53:20.962 No valid GPT data, bailing 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:53:20.962 No valid GPT data, bailing 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:53:20.962 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 --hostid=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 -a 10.0.0.1 -t tcp -s 4420 00:53:20.963 00:53:20.963 Discovery Log Number of Records 2, Generation counter 2 00:53:20.963 =====Discovery Log Entry 0====== 00:53:20.963 trtype: tcp 00:53:20.963 adrfam: ipv4 00:53:20.963 subtype: current discovery subsystem 00:53:20.963 treq: not specified, sq flow control disable supported 00:53:20.963 portid: 1 00:53:20.963 trsvcid: 4420 00:53:20.963 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:53:20.963 traddr: 10.0.0.1 00:53:20.963 eflags: none 00:53:20.963 sectype: none 00:53:20.963 =====Discovery Log Entry 1====== 00:53:20.963 trtype: tcp 00:53:20.963 adrfam: ipv4 00:53:20.963 subtype: nvme subsystem 00:53:20.963 treq: not specified, sq flow control disable supported 00:53:20.963 portid: 1 00:53:20.963 trsvcid: 4420 00:53:20.963 subnqn: nqn.2016-06.io.spdk:testnqn 00:53:20.963 traddr: 10.0.0.1 00:53:20.963 eflags: none 00:53:20.963 sectype: none 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:53:20.963 05:44:07 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:53:20.963 05:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:53:24.252 Initializing NVMe Controllers 00:53:24.252 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:53:24.252 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:53:24.252 Initialization complete. Launching workers. 00:53:24.252 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32764, failed: 0 00:53:24.252 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32764, failed to submit 0 00:53:24.252 success 0, unsuccessful 32764, failed 0 00:53:24.252 05:44:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:53:24.252 05:44:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:53:27.541 Initializing NVMe Controllers 00:53:27.541 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:53:27.541 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:53:27.541 Initialization complete. Launching workers. 
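With /dev/nvme1n1 selected, the mkdir/echo/ln sequence above builds the kernel NVMe/TCP target entirely through nvmet's configfs tree. The xtrace shows the echo commands but not their redirect targets, so the sketch below fills those in with the standard nvmet attribute names; treat it as an approximation of the helper under that assumption, not a verbatim copy. The NQN, backing device, IP and port are the ones from this run.

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir -p "$subsys/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$subsys/attr_model"             # assumed target of the first echo
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                   # enables the port

Once the symlink is in place, the nvme discover call above confirms the port exposes both the discovery subsystem and nqn.2016-06.io.spdk:testnqn. The rabort helper then drives build/examples/abort against that target once per queue depth (4, 24 and 64); the qd=24 run's results continue just below.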
00:53:27.541 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64531, failed: 0 00:53:27.541 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25804, failed to submit 38727 00:53:27.541 success 0, unsuccessful 25804, failed 0 00:53:27.541 05:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:53:27.541 05:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:53:30.828 Initializing NVMe Controllers 00:53:30.828 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:53:30.828 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:53:30.828 Initialization complete. Launching workers. 00:53:30.828 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68452, failed: 0 00:53:30.828 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17108, failed to submit 51344 00:53:30.828 success 0, unsuccessful 17108, failed 0 00:53:30.828 05:44:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:53:30.828 05:44:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:53:30.828 05:44:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:53:30.828 05:44:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:53:30.828 05:44:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:53:30.828 05:44:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:53:30.828 05:44:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:53:30.828 05:44:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:53:30.828 05:44:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:53:30.828 05:44:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:53:31.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:53:32.776 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:53:32.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:53:32.776 00:53:32.776 real 0m12.721s 00:53:32.776 user 0m5.805s 00:53:32.776 sys 0m4.245s 00:53:32.776 05:44:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:32.776 05:44:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:53:32.776 ************************************ 00:53:32.776 END TEST kernel_target_abort 00:53:32.776 ************************************ 00:53:32.776 05:44:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:53:32.776 05:44:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:53:32.776 
05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:32.776 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:53:32.776 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:32.776 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:53:32.776 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:32.776 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:32.776 rmmod nvme_tcp 00:53:32.776 rmmod nvme_fabrics 00:53:32.776 rmmod nvme_keyring 00:53:32.776 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:33.036 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:53:33.036 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:53:33.036 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84034 ']' 00:53:33.036 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84034 00:53:33.036 05:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84034 ']' 00:53:33.036 05:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84034 00:53:33.036 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84034) - No such process 00:53:33.036 05:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84034 is not found' 00:53:33.036 Process with pid 84034 is not found 00:53:33.036 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:53:33.036 05:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:53:33.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:53:33.296 Waiting for block devices as requested 00:53:33.296 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:53:33.296 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:33.556 05:44:20 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:33.556 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:33.816 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:33.816 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:33.816 05:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:53:33.816 05:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:33.816 05:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:53:33.816 00:53:33.816 real 0m26.596s 00:53:33.816 user 0m48.241s 00:53:33.816 sys 0m7.919s 00:53:33.816 05:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:33.816 05:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:53:33.816 ************************************ 00:53:33.816 END TEST nvmf_abort_qd_sizes 00:53:33.816 ************************************ 00:53:33.816 05:44:20 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:53:33.816 05:44:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:53:33.816 05:44:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:33.816 05:44:20 -- common/autotest_common.sh@10 -- # set +x 00:53:33.816 ************************************ 00:53:33.816 START TEST keyring_file 00:53:33.816 ************************************ 00:53:33.816 05:44:20 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:53:33.816 * Looking for test storage... 
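Teardown in clean_kernel_target (traced above, just before the keyring_file suite starts) is the same configfs tree unwound in reverse, followed by module unload. As before, redirect targets are not visible in the xtrace, so the enable path below is an assumption; the rest mirrors the rm/rmdir/modprobe commands shown.

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    echo 0 > "$subsys/namespaces/1/enable"   # disable the namespace first (assumed target)
    rm -f "$port/subsystems/$nqn"            # detach the subsystem from the port
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet              # unload once the configfs tree is empty

After that, nvmftestfini unloads the host-side nvme-tcp/nvme-fabrics/nvme-keyring modules, restores iptables, and deletes the veth/bridge interfaces and the nvmf_tgt_ns_spdk namespace set up for the suite, as the entries around this point show.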
00:53:33.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:53:33.816 05:44:20 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:53:33.816 05:44:20 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:53:33.816 05:44:20 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:53:34.077 05:44:20 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:53:34.077 05:44:20 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:34.077 05:44:20 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:53:34.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:34.077 --rc genhtml_branch_coverage=1 00:53:34.077 --rc genhtml_function_coverage=1 00:53:34.077 --rc genhtml_legend=1 00:53:34.077 --rc geninfo_all_blocks=1 00:53:34.077 --rc geninfo_unexecuted_blocks=1 00:53:34.077 00:53:34.077 ' 00:53:34.077 05:44:20 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:53:34.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:34.077 --rc genhtml_branch_coverage=1 00:53:34.077 --rc genhtml_function_coverage=1 00:53:34.077 --rc genhtml_legend=1 00:53:34.077 --rc geninfo_all_blocks=1 00:53:34.077 --rc 
geninfo_unexecuted_blocks=1 00:53:34.077 00:53:34.077 ' 00:53:34.077 05:44:20 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:53:34.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:34.077 --rc genhtml_branch_coverage=1 00:53:34.077 --rc genhtml_function_coverage=1 00:53:34.077 --rc genhtml_legend=1 00:53:34.077 --rc geninfo_all_blocks=1 00:53:34.077 --rc geninfo_unexecuted_blocks=1 00:53:34.077 00:53:34.077 ' 00:53:34.077 05:44:20 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:53:34.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:34.077 --rc genhtml_branch_coverage=1 00:53:34.077 --rc genhtml_function_coverage=1 00:53:34.077 --rc genhtml_legend=1 00:53:34.077 --rc geninfo_all_blocks=1 00:53:34.077 --rc geninfo_unexecuted_blocks=1 00:53:34.077 00:53:34.077 ' 00:53:34.077 05:44:20 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:53:34.077 05:44:20 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:34.077 05:44:20 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:34.077 05:44:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:34.078 05:44:20 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:34.078 05:44:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:34.078 05:44:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:34.078 05:44:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:53:34.078 05:44:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@51 -- # : 0 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:34.078 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:53:34.078 05:44:20 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.slRc3JA7xh 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.slRc3JA7xh 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.slRc3JA7xh 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.slRc3JA7xh 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.t10zXKwLHK 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:53:34.078 05:44:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.t10zXKwLHK 00:53:34.078 05:44:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.t10zXKwLHK 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.t10zXKwLHK 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=84940 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:53:34.078 05:44:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84940 00:53:34.078 05:44:20 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84940 ']' 00:53:34.078 05:44:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:34.078 05:44:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:34.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
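prep_key above turns a raw hex key into a TLS PSK interchange string (the NVMeTLSkey-1 form computed by the format_key/python step), writes it to a mktemp file and locks the mode down to 0600. A condensed sketch of that flow, assuming the repo's nvmf/common.sh is sourced to provide format_interchange_psk; the temp path in the comment is the one mktemp returned in this run:

    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # provides format_interchange_psk

    key0path=$(mktemp)                            # /tmp/tmp.slRc3JA7xh in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
    chmod 0600 "$key0path"                        # file-based keyring rejects looser modes

The 0600 step matters: later in this test a deliberate chmod 0660 makes keyring_file_add_key fail with "Invalid permissions for key file".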
00:53:34.078 05:44:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:34.078 05:44:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:34.078 05:44:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:53:34.078 [2024-12-09 05:44:21.045608] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:53:34.078 [2024-12-09 05:44:21.045709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84940 ] 00:53:34.338 [2024-12-09 05:44:21.200826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:34.338 [2024-12-09 05:44:21.240722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:34.338 [2024-12-09 05:44:21.288782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:53:34.597 05:44:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:53:34.597 [2024-12-09 05:44:21.435792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:34.597 null0 00:53:34.597 [2024-12-09 05:44:21.467764] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:53:34.597 [2024-12-09 05:44:21.467942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:34.597 05:44:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:53:34.597 [2024-12-09 05:44:21.495768] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:53:34.597 request: 00:53:34.597 { 00:53:34.597 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:53:34.597 "secure_channel": false, 00:53:34.597 "listen_address": { 00:53:34.597 "trtype": "tcp", 00:53:34.597 "traddr": "127.0.0.1", 00:53:34.597 "trsvcid": "4420" 00:53:34.597 }, 00:53:34.597 "method": "nvmf_subsystem_add_listener", 
00:53:34.597 "req_id": 1 00:53:34.597 } 00:53:34.597 Got JSON-RPC error response 00:53:34.597 response: 00:53:34.597 { 00:53:34.597 "code": -32602, 00:53:34.597 "message": "Invalid parameters" 00:53:34.597 } 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:34.597 05:44:21 keyring_file -- keyring/file.sh@47 -- # bperfpid=84945 00:53:34.597 05:44:21 keyring_file -- keyring/file.sh@49 -- # waitforlisten 84945 /var/tmp/bperf.sock 00:53:34.597 05:44:21 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84945 ']' 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:53:34.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:34.597 05:44:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:53:34.597 [2024-12-09 05:44:21.563968] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:53:34.597 [2024-12-09 05:44:21.564079] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84945 ] 00:53:34.856 [2024-12-09 05:44:21.719200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:34.856 [2024-12-09 05:44:21.761471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:34.856 [2024-12-09 05:44:21.795576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:53:35.115 05:44:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:35.115 05:44:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:53:35.115 05:44:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.slRc3JA7xh 00:53:35.115 05:44:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.slRc3JA7xh 00:53:35.373 05:44:22 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.t10zXKwLHK 00:53:35.373 05:44:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.t10zXKwLHK 00:53:35.631 05:44:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:53:35.631 05:44:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:53:35.631 05:44:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:35.631 05:44:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:35.631 05:44:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:53:35.889 05:44:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.slRc3JA7xh == \/\t\m\p\/\t\m\p\.\s\l\R\c\3\J\A\7\x\h ]] 00:53:35.889 05:44:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:53:35.889 05:44:22 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:53:35.889 05:44:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:35.889 05:44:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:35.889 05:44:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:53:36.146 05:44:22 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.t10zXKwLHK == \/\t\m\p\/\t\m\p\.\t\1\0\z\X\K\w\L\H\K ]] 00:53:36.146 05:44:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:53:36.146 05:44:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:53:36.146 05:44:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:36.146 05:44:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:36.146 05:44:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:53:36.146 05:44:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:36.405 05:44:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:53:36.405 05:44:23 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:53:36.405 05:44:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:36.405 05:44:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:53:36.405 05:44:23 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:36.405 05:44:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:36.405 05:44:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:53:36.405 05:44:23 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:53:36.405 05:44:23 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:53:36.405 05:44:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:53:36.663 [2024-12-09 05:44:23.566205] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:53:36.922 nvme0n1 00:53:36.922 05:44:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:53:36.922 05:44:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:53:36.922 05:44:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:36.922 05:44:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:36.922 05:44:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:36.922 05:44:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:53:36.922 05:44:23 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:53:36.922 05:44:23 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:53:36.922 05:44:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:36.922 05:44:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:53:36.922 05:44:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:53:36.922 05:44:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:36.922 05:44:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:37.180 05:44:24 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:53:37.180 05:44:24 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:53:37.438 Running I/O for 1 seconds... 
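Once bdevperf is listening on /var/tmp/bperf.sock, the key files are registered with its keyring and a controller is attached with --psk, which is what bumps key0's refcount from 1 to 2. A sketch of that sequence using the same rpc.py calls as the trace, with the key path taken from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.slRc3JA7xh
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0

    # a key held open by a controller reports refcnt 2 (the registered file
    # plus the controller using it for TLS)
    "$rpc" -s /var/tmp/bperf.sock keyring_get_keys |
        jq -r '.[] | select(.name == "key0") | .refcnt'

With nvme0n1 attached over TLS, examples/bdev/bdevperf/bdevperf.py perform_tests launches the 1-second random read/write run whose IOPS and latency results follow below.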
00:53:38.375 13336.00 IOPS, 52.09 MiB/s 00:53:38.375 Latency(us) 00:53:38.375 [2024-12-09T05:44:25.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:38.375 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:53:38.375 nvme0n1 : 1.01 13391.48 52.31 0.00 0.00 9533.85 4051.32 21567.30 00:53:38.375 [2024-12-09T05:44:25.347Z] =================================================================================================================== 00:53:38.375 [2024-12-09T05:44:25.347Z] Total : 13391.48 52.31 0.00 0.00 9533.85 4051.32 21567.30 00:53:38.375 { 00:53:38.375 "results": [ 00:53:38.375 { 00:53:38.375 "job": "nvme0n1", 00:53:38.375 "core_mask": "0x2", 00:53:38.375 "workload": "randrw", 00:53:38.375 "percentage": 50, 00:53:38.375 "status": "finished", 00:53:38.375 "queue_depth": 128, 00:53:38.375 "io_size": 4096, 00:53:38.375 "runtime": 1.005565, 00:53:38.375 "iops": 13391.476433646756, 00:53:38.375 "mibps": 52.31045481893264, 00:53:38.375 "io_failed": 0, 00:53:38.375 "io_timeout": 0, 00:53:38.375 "avg_latency_us": 9533.848560279763, 00:53:38.375 "min_latency_us": 4051.316363636364, 00:53:38.375 "max_latency_us": 21567.30181818182 00:53:38.375 } 00:53:38.375 ], 00:53:38.375 "core_count": 1 00:53:38.375 } 00:53:38.375 05:44:25 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:53:38.375 05:44:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:53:38.633 05:44:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:53:38.633 05:44:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:53:38.633 05:44:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:38.633 05:44:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:38.633 05:44:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:53:38.633 05:44:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:38.891 05:44:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:53:38.891 05:44:25 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:53:38.891 05:44:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:53:38.891 05:44:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:38.891 05:44:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:38.891 05:44:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:53:38.891 05:44:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:39.149 05:44:26 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:53:39.150 05:44:26 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:53:39.150 05:44:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:53:39.150 05:44:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:53:39.150 05:44:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:53:39.150 05:44:26 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:39.150 05:44:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:53:39.150 05:44:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:39.150 05:44:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:53:39.150 05:44:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:53:39.408 [2024-12-09 05:44:26.302187] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:53:39.408 [2024-12-09 05:44:26.302342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d835d0 (107): Transport endpoint is not connected 00:53:39.408 [2024-12-09 05:44:26.303335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d835d0 (9): Bad file descriptor 00:53:39.408 [2024-12-09 05:44:26.304332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:53:39.408 [2024-12-09 05:44:26.304362] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:53:39.408 [2024-12-09 05:44:26.304373] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:53:39.408 [2024-12-09 05:44:26.304382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:53:39.408 request: 00:53:39.408 { 00:53:39.408 "name": "nvme0", 00:53:39.408 "trtype": "tcp", 00:53:39.408 "traddr": "127.0.0.1", 00:53:39.408 "adrfam": "ipv4", 00:53:39.408 "trsvcid": "4420", 00:53:39.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:39.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:53:39.408 "prchk_reftag": false, 00:53:39.408 "prchk_guard": false, 00:53:39.408 "hdgst": false, 00:53:39.408 "ddgst": false, 00:53:39.408 "psk": "key1", 00:53:39.408 "allow_unrecognized_csi": false, 00:53:39.408 "method": "bdev_nvme_attach_controller", 00:53:39.408 "req_id": 1 00:53:39.408 } 00:53:39.408 Got JSON-RPC error response 00:53:39.408 response: 00:53:39.408 { 00:53:39.408 "code": -5, 00:53:39.408 "message": "Input/output error" 00:53:39.408 } 00:53:39.408 05:44:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:53:39.408 05:44:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:39.408 05:44:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:39.408 05:44:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:39.408 05:44:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:53:39.408 05:44:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:53:39.408 05:44:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:39.408 05:44:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:39.408 05:44:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:53:39.408 05:44:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:39.667 05:44:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:53:39.667 05:44:26 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:53:39.667 05:44:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:53:39.667 05:44:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:39.667 05:44:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:53:39.667 05:44:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:39.667 05:44:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:39.925 05:44:26 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:53:39.925 05:44:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:53:39.925 05:44:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:53:40.184 05:44:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:53:40.184 05:44:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:53:40.443 05:44:27 keyring_file -- keyring/file.sh@78 -- # jq length 00:53:40.443 05:44:27 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:53:40.443 05:44:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:40.703 05:44:27 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:53:40.703 05:44:27 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.slRc3JA7xh 00:53:40.703 05:44:27 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.slRc3JA7xh 00:53:40.703 05:44:27 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:53:40.703 05:44:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.slRc3JA7xh 00:53:40.703 05:44:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:53:40.703 05:44:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:40.703 05:44:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:53:40.703 05:44:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:40.703 05:44:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.slRc3JA7xh 00:53:40.703 05:44:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.slRc3JA7xh 00:53:40.963 [2024-12-09 05:44:27.776839] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.slRc3JA7xh': 0100660 00:53:40.963 [2024-12-09 05:44:27.776875] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:53:40.963 request: 00:53:40.963 { 00:53:40.963 "name": "key0", 00:53:40.963 "path": "/tmp/tmp.slRc3JA7xh", 00:53:40.963 "method": "keyring_file_add_key", 00:53:40.963 "req_id": 1 00:53:40.963 } 00:53:40.963 Got JSON-RPC error response 00:53:40.963 response: 00:53:40.963 { 00:53:40.963 "code": -1, 00:53:40.963 "message": "Operation not permitted" 00:53:40.963 } 00:53:40.963 05:44:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:53:40.963 05:44:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:40.963 05:44:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:40.963 05:44:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:40.963 05:44:27 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.slRc3JA7xh 00:53:40.963 05:44:27 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.slRc3JA7xh 00:53:40.963 05:44:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.slRc3JA7xh 00:53:41.223 05:44:28 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.slRc3JA7xh 00:53:41.223 05:44:28 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:53:41.223 05:44:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:53:41.223 05:44:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:41.223 05:44:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:53:41.223 05:44:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:41.223 05:44:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:41.483 05:44:28 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:53:41.483 05:44:28 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:53:41.483 05:44:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:53:41.483 05:44:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:53:41.483 05:44:28 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:53:41.483 05:44:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:41.483 05:44:28 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:53:41.483 05:44:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:41.483 05:44:28 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:53:41.483 05:44:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:53:41.742 [2024-12-09 05:44:28.523867] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.slRc3JA7xh': No such file or directory 00:53:41.743 [2024-12-09 05:44:28.523906] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:53:41.743 [2024-12-09 05:44:28.523941] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:53:41.743 [2024-12-09 05:44:28.523949] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:53:41.743 [2024-12-09 05:44:28.523958] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:53:41.743 [2024-12-09 05:44:28.523980] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:53:41.743 request: 00:53:41.743 { 00:53:41.743 "name": "nvme0", 00:53:41.743 "trtype": "tcp", 00:53:41.743 "traddr": "127.0.0.1", 00:53:41.743 "adrfam": "ipv4", 00:53:41.743 "trsvcid": "4420", 00:53:41.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:41.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:53:41.743 "prchk_reftag": false, 00:53:41.743 "prchk_guard": false, 00:53:41.743 "hdgst": false, 00:53:41.743 "ddgst": false, 00:53:41.743 "psk": "key0", 00:53:41.743 "allow_unrecognized_csi": false, 00:53:41.743 "method": "bdev_nvme_attach_controller", 00:53:41.743 "req_id": 1 00:53:41.743 } 00:53:41.743 Got JSON-RPC error response 00:53:41.743 response: 00:53:41.743 { 00:53:41.743 "code": -19, 00:53:41.743 "message": "No such device" 00:53:41.743 } 00:53:41.743 05:44:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:53:41.743 05:44:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:41.743 05:44:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:41.743 05:44:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:41.743 05:44:28 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:53:41.743 05:44:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:53:42.002 05:44:28 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:53:42.002 05:44:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:53:42.002 05:44:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:53:42.002 05:44:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:53:42.002 
05:44:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:53:42.002 05:44:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:53:42.002 05:44:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JGHD25pASI 00:53:42.002 05:44:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:53:42.002 05:44:28 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:53:42.002 05:44:28 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:53:42.002 05:44:28 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:53:42.002 05:44:28 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:53:42.002 05:44:28 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:53:42.002 05:44:28 keyring_file -- nvmf/common.sh@733 -- # python - 00:53:42.002 05:44:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JGHD25pASI 00:53:42.002 05:44:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JGHD25pASI 00:53:42.002 05:44:28 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.JGHD25pASI 00:53:42.002 05:44:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JGHD25pASI 00:53:42.002 05:44:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JGHD25pASI 00:53:42.262 05:44:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:53:42.262 05:44:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:53:42.522 nvme0n1 00:53:42.522 05:44:29 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:53:42.522 05:44:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:42.522 05:44:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:53:42.522 05:44:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:42.522 05:44:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:53:42.522 05:44:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:42.781 05:44:29 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:53:42.781 05:44:29 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:53:42.781 05:44:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:53:43.040 05:44:29 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:53:43.040 05:44:29 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:53:43.040 05:44:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:43.040 05:44:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:43.040 05:44:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:53:43.299 05:44:30 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:53:43.299 05:44:30 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:53:43.299 05:44:30 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:53:43.299 05:44:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:43.299 05:44:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:43.299 05:44:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:43.299 05:44:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:53:43.558 05:44:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:53:43.558 05:44:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:53:43.558 05:44:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:53:43.818 05:44:30 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:53:43.818 05:44:30 keyring_file -- keyring/file.sh@105 -- # jq length 00:53:43.818 05:44:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:44.077 05:44:31 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:53:44.077 05:44:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JGHD25pASI 00:53:44.077 05:44:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JGHD25pASI 00:53:44.335 05:44:31 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.t10zXKwLHK 00:53:44.335 05:44:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.t10zXKwLHK 00:53:44.593 05:44:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:53:44.593 05:44:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:53:45.160 nvme0n1 00:53:45.160 05:44:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:53:45.160 05:44:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:53:45.420 05:44:32 keyring_file -- keyring/file.sh@113 -- # config='{ 00:53:45.420 "subsystems": [ 00:53:45.420 { 00:53:45.420 "subsystem": "keyring", 00:53:45.420 "config": [ 00:53:45.420 { 00:53:45.420 "method": "keyring_file_add_key", 00:53:45.420 "params": { 00:53:45.420 "name": "key0", 00:53:45.420 "path": "/tmp/tmp.JGHD25pASI" 00:53:45.420 } 00:53:45.420 }, 00:53:45.420 { 00:53:45.420 "method": "keyring_file_add_key", 00:53:45.420 "params": { 00:53:45.420 "name": "key1", 00:53:45.420 "path": "/tmp/tmp.t10zXKwLHK" 00:53:45.420 } 00:53:45.420 } 00:53:45.420 ] 00:53:45.420 }, 00:53:45.420 { 00:53:45.420 "subsystem": "iobuf", 00:53:45.420 "config": [ 00:53:45.420 { 00:53:45.420 "method": "iobuf_set_options", 00:53:45.420 "params": { 00:53:45.420 "small_pool_count": 8192, 00:53:45.420 "large_pool_count": 1024, 00:53:45.420 "small_bufsize": 8192, 00:53:45.420 "large_bufsize": 135168, 00:53:45.420 "enable_numa": false 00:53:45.420 } 00:53:45.420 } 00:53:45.420 ] 00:53:45.420 }, 00:53:45.420 { 00:53:45.420 "subsystem": 
"sock", 00:53:45.420 "config": [ 00:53:45.420 { 00:53:45.420 "method": "sock_set_default_impl", 00:53:45.420 "params": { 00:53:45.420 "impl_name": "uring" 00:53:45.420 } 00:53:45.420 }, 00:53:45.420 { 00:53:45.420 "method": "sock_impl_set_options", 00:53:45.420 "params": { 00:53:45.420 "impl_name": "ssl", 00:53:45.420 "recv_buf_size": 4096, 00:53:45.420 "send_buf_size": 4096, 00:53:45.420 "enable_recv_pipe": true, 00:53:45.420 "enable_quickack": false, 00:53:45.420 "enable_placement_id": 0, 00:53:45.420 "enable_zerocopy_send_server": true, 00:53:45.420 "enable_zerocopy_send_client": false, 00:53:45.420 "zerocopy_threshold": 0, 00:53:45.420 "tls_version": 0, 00:53:45.420 "enable_ktls": false 00:53:45.420 } 00:53:45.420 }, 00:53:45.420 { 00:53:45.420 "method": "sock_impl_set_options", 00:53:45.420 "params": { 00:53:45.420 "impl_name": "posix", 00:53:45.420 "recv_buf_size": 2097152, 00:53:45.420 "send_buf_size": 2097152, 00:53:45.420 "enable_recv_pipe": true, 00:53:45.420 "enable_quickack": false, 00:53:45.420 "enable_placement_id": 0, 00:53:45.420 "enable_zerocopy_send_server": true, 00:53:45.420 "enable_zerocopy_send_client": false, 00:53:45.420 "zerocopy_threshold": 0, 00:53:45.420 "tls_version": 0, 00:53:45.420 "enable_ktls": false 00:53:45.420 } 00:53:45.420 }, 00:53:45.420 { 00:53:45.420 "method": "sock_impl_set_options", 00:53:45.420 "params": { 00:53:45.420 "impl_name": "uring", 00:53:45.420 "recv_buf_size": 2097152, 00:53:45.420 "send_buf_size": 2097152, 00:53:45.420 "enable_recv_pipe": true, 00:53:45.420 "enable_quickack": false, 00:53:45.420 "enable_placement_id": 0, 00:53:45.420 "enable_zerocopy_send_server": false, 00:53:45.420 "enable_zerocopy_send_client": false, 00:53:45.421 "zerocopy_threshold": 0, 00:53:45.421 "tls_version": 0, 00:53:45.421 "enable_ktls": false 00:53:45.421 } 00:53:45.421 } 00:53:45.421 ] 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "subsystem": "vmd", 00:53:45.421 "config": [] 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "subsystem": "accel", 00:53:45.421 "config": [ 00:53:45.421 { 00:53:45.421 "method": "accel_set_options", 00:53:45.421 "params": { 00:53:45.421 "small_cache_size": 128, 00:53:45.421 "large_cache_size": 16, 00:53:45.421 "task_count": 2048, 00:53:45.421 "sequence_count": 2048, 00:53:45.421 "buf_count": 2048 00:53:45.421 } 00:53:45.421 } 00:53:45.421 ] 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "subsystem": "bdev", 00:53:45.421 "config": [ 00:53:45.421 { 00:53:45.421 "method": "bdev_set_options", 00:53:45.421 "params": { 00:53:45.421 "bdev_io_pool_size": 65535, 00:53:45.421 "bdev_io_cache_size": 256, 00:53:45.421 "bdev_auto_examine": true, 00:53:45.421 "iobuf_small_cache_size": 128, 00:53:45.421 "iobuf_large_cache_size": 16 00:53:45.421 } 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "method": "bdev_raid_set_options", 00:53:45.421 "params": { 00:53:45.421 "process_window_size_kb": 1024, 00:53:45.421 "process_max_bandwidth_mb_sec": 0 00:53:45.421 } 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "method": "bdev_iscsi_set_options", 00:53:45.421 "params": { 00:53:45.421 "timeout_sec": 30 00:53:45.421 } 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "method": "bdev_nvme_set_options", 00:53:45.421 "params": { 00:53:45.421 "action_on_timeout": "none", 00:53:45.421 "timeout_us": 0, 00:53:45.421 "timeout_admin_us": 0, 00:53:45.421 "keep_alive_timeout_ms": 10000, 00:53:45.421 "arbitration_burst": 0, 00:53:45.421 "low_priority_weight": 0, 00:53:45.421 "medium_priority_weight": 0, 00:53:45.421 "high_priority_weight": 0, 00:53:45.421 "nvme_adminq_poll_period_us": 
10000, 00:53:45.421 "nvme_ioq_poll_period_us": 0, 00:53:45.421 "io_queue_requests": 512, 00:53:45.421 "delay_cmd_submit": true, 00:53:45.421 "transport_retry_count": 4, 00:53:45.421 "bdev_retry_count": 3, 00:53:45.421 "transport_ack_timeout": 0, 00:53:45.421 "ctrlr_loss_timeout_sec": 0, 00:53:45.421 "reconnect_delay_sec": 0, 00:53:45.421 "fast_io_fail_timeout_sec": 0, 00:53:45.421 "disable_auto_failback": false, 00:53:45.421 "generate_uuids": false, 00:53:45.421 "transport_tos": 0, 00:53:45.421 "nvme_error_stat": false, 00:53:45.421 "rdma_srq_size": 0, 00:53:45.421 "io_path_stat": false, 00:53:45.421 "allow_accel_sequence": false, 00:53:45.421 "rdma_max_cq_size": 0, 00:53:45.421 "rdma_cm_event_timeout_ms": 0, 00:53:45.421 "dhchap_digests": [ 00:53:45.421 "sha256", 00:53:45.421 "sha384", 00:53:45.421 "sha512" 00:53:45.421 ], 00:53:45.421 "dhchap_dhgroups": [ 00:53:45.421 "null", 00:53:45.421 "ffdhe2048", 00:53:45.421 "ffdhe3072", 00:53:45.421 "ffdhe4096", 00:53:45.421 "ffdhe6144", 00:53:45.421 "ffdhe8192" 00:53:45.421 ] 00:53:45.421 } 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "method": "bdev_nvme_attach_controller", 00:53:45.421 "params": { 00:53:45.421 "name": "nvme0", 00:53:45.421 "trtype": "TCP", 00:53:45.421 "adrfam": "IPv4", 00:53:45.421 "traddr": "127.0.0.1", 00:53:45.421 "trsvcid": "4420", 00:53:45.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:45.421 "prchk_reftag": false, 00:53:45.421 "prchk_guard": false, 00:53:45.421 "ctrlr_loss_timeout_sec": 0, 00:53:45.421 "reconnect_delay_sec": 0, 00:53:45.421 "fast_io_fail_timeout_sec": 0, 00:53:45.421 "psk": "key0", 00:53:45.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:53:45.421 "hdgst": false, 00:53:45.421 "ddgst": false, 00:53:45.421 "multipath": "multipath" 00:53:45.421 } 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "method": "bdev_nvme_set_hotplug", 00:53:45.421 "params": { 00:53:45.421 "period_us": 100000, 00:53:45.421 "enable": false 00:53:45.421 } 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "method": "bdev_wait_for_examine" 00:53:45.421 } 00:53:45.421 ] 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "subsystem": "nbd", 00:53:45.421 "config": [] 00:53:45.421 } 00:53:45.421 ] 00:53:45.421 }' 00:53:45.421 05:44:32 keyring_file -- keyring/file.sh@115 -- # killprocess 84945 00:53:45.421 05:44:32 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84945 ']' 00:53:45.421 05:44:32 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84945 00:53:45.421 05:44:32 keyring_file -- common/autotest_common.sh@959 -- # uname 00:53:45.421 05:44:32 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:45.421 05:44:32 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84945 00:53:45.421 killing process with pid 84945 00:53:45.421 Received shutdown signal, test time was about 1.000000 seconds 00:53:45.421 00:53:45.421 Latency(us) 00:53:45.421 [2024-12-09T05:44:32.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:45.421 [2024-12-09T05:44:32.393Z] =================================================================================================================== 00:53:45.421 [2024-12-09T05:44:32.393Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:45.421 05:44:32 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:45.421 05:44:32 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:45.421 05:44:32 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84945' 00:53:45.421 
05:44:32 keyring_file -- common/autotest_common.sh@973 -- # kill 84945 00:53:45.421 05:44:32 keyring_file -- common/autotest_common.sh@978 -- # wait 84945 00:53:45.421 05:44:32 keyring_file -- keyring/file.sh@118 -- # bperfpid=85188 00:53:45.421 05:44:32 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85188 /var/tmp/bperf.sock 00:53:45.421 05:44:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85188 ']' 00:53:45.421 05:44:32 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:53:45.421 "subsystems": [ 00:53:45.421 { 00:53:45.421 "subsystem": "keyring", 00:53:45.421 "config": [ 00:53:45.421 { 00:53:45.421 "method": "keyring_file_add_key", 00:53:45.421 "params": { 00:53:45.421 "name": "key0", 00:53:45.421 "path": "/tmp/tmp.JGHD25pASI" 00:53:45.421 } 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "method": "keyring_file_add_key", 00:53:45.421 "params": { 00:53:45.421 "name": "key1", 00:53:45.421 "path": "/tmp/tmp.t10zXKwLHK" 00:53:45.421 } 00:53:45.421 } 00:53:45.421 ] 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "subsystem": "iobuf", 00:53:45.421 "config": [ 00:53:45.421 { 00:53:45.421 "method": "iobuf_set_options", 00:53:45.421 "params": { 00:53:45.421 "small_pool_count": 8192, 00:53:45.421 "large_pool_count": 1024, 00:53:45.421 "small_bufsize": 8192, 00:53:45.421 "large_bufsize": 135168, 00:53:45.421 "enable_numa": false 00:53:45.421 } 00:53:45.421 } 00:53:45.421 ] 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "subsystem": "sock", 00:53:45.421 "config": [ 00:53:45.421 { 00:53:45.421 "method": "sock_set_default_impl", 00:53:45.421 "params": { 00:53:45.421 "impl_name": "uring" 00:53:45.421 } 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "method": "sock_impl_set_options", 00:53:45.421 "params": { 00:53:45.421 "impl_name": "ssl", 00:53:45.421 "recv_buf_size": 4096, 00:53:45.421 "send_buf_size": 4096, 00:53:45.421 "enable_recv_pipe": true, 00:53:45.421 "enable_quickack": false, 00:53:45.421 "enable_placement_id": 0, 00:53:45.421 "enable_zerocopy_send_server": true, 00:53:45.421 "enable_zerocopy_send_client": false, 00:53:45.421 "zerocopy_threshold": 0, 00:53:45.421 "tls_version": 0, 00:53:45.421 "enable_ktls": false 00:53:45.421 } 00:53:45.421 }, 00:53:45.421 { 00:53:45.421 "method": "sock_impl_set_options", 00:53:45.421 "params": { 00:53:45.421 "impl_name": "posix", 00:53:45.421 "recv_buf_size": 2097152, 00:53:45.421 "send_buf_size": 2097152, 00:53:45.421 "enable_recv_pipe": true, 00:53:45.421 "enable_quickack": false, 00:53:45.422 "enable_placement_id": 0, 00:53:45.422 "enable_zerocopy_send_server": true, 00:53:45.422 "enable_zerocopy_send_client": false, 00:53:45.422 "zerocopy_threshold": 0, 00:53:45.422 "tls_version": 0, 00:53:45.422 "enable_ktls": false 00:53:45.422 } 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 "method": "sock_impl_set_options", 00:53:45.422 "params": { 00:53:45.422 "impl_name": "uring", 00:53:45.422 "recv_buf_size": 2097152, 00:53:45.422 "send_buf_size": 2097152, 00:53:45.422 "enable_recv_pipe": true, 00:53:45.422 "enable_quickack": false, 00:53:45.422 "enable_placement_id": 0, 00:53:45.422 "enable_zerocopy_send_server": false, 00:53:45.422 "enable_zerocopy_send_client": false, 00:53:45.422 "zerocopy_threshold": 0, 00:53:45.422 "tls_version": 0, 00:53:45.422 "enable_ktls": false 00:53:45.422 } 00:53:45.422 } 00:53:45.422 ] 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 "subsystem": "vmd", 00:53:45.422 "config": [] 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 "subsystem": "accel", 00:53:45.422 "config": [ 00:53:45.422 { 00:53:45.422 "method": "accel_set_options", 
00:53:45.422 "params": { 00:53:45.422 "small_cache_size": 128, 00:53:45.422 "large_cache_size": 16, 00:53:45.422 "task_count": 2048, 00:53:45.422 "sequence_count": 2048, 00:53:45.422 "buf_count": 2048 00:53:45.422 } 00:53:45.422 } 00:53:45.422 ] 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 "subsystem": "bdev", 00:53:45.422 "config": [ 00:53:45.422 { 00:53:45.422 "method": "bdev_set_options", 00:53:45.422 "params": { 00:53:45.422 "bdev_io_pool_size": 65535, 00:53:45.422 "bdev_io_cache_size": 256, 00:53:45.422 "bdev_auto_examine": true, 00:53:45.422 "iobuf_small_cache_size": 128, 00:53:45.422 "iobuf_large_cache_size": 16 00:53:45.422 } 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 "method": "bdev_raid_set_options", 00:53:45.422 "params": { 00:53:45.422 "process_window_size_kb": 1024, 00:53:45.422 "process_max_bandwidth_mb_sec": 0 00:53:45.422 } 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 "method": "bdev_iscsi_set_options", 00:53:45.422 "params": { 00:53:45.422 "timeout_sec": 30 00:53:45.422 } 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 "method": "bdev_nvme_set_options", 00:53:45.422 "params": { 00:53:45.422 "action_on_timeout": "none", 00:53:45.422 "timeout_us": 0, 00:53:45.422 "timeout_admin_us": 0, 00:53:45.422 "keep_alive_timeout_ms": 10000, 00:53:45.422 "arbitration_burst": 0, 00:53:45.422 "low_priority_weight": 0, 00:53:45.422 "medium_priority_weight": 0, 00:53:45.422 "high_priority_weight": 0, 00:53:45.422 "nvme_adminq_poll_period_us": 10000, 00:53:45.422 "nvme_ioq_poll_period_us": 0, 00:53:45.422 "io_queue_requests": 512, 00:53:45.422 "delay_cmd_submit": true, 00:53:45.422 "transport_retry_count": 4, 00:53:45.422 "bdev_retry_count": 3, 00:53:45.422 "transport_ack_timeout": 0, 00:53:45.422 "ctrlr_loss_timeout_sec": 0, 00:53:45.422 "reconnect_delay_sec": 0, 00:53:45.422 "fast_io_fail_timeout_sec": 0, 00:53:45.422 "disable_auto_failback": false, 00:53:45.422 "generate_uuids": false, 00:53:45.422 "transport_tos": 0, 00:53:45.422 "nvme_error_stat": false, 00:53:45.422 "rdma_srq_size": 0, 00:53:45.422 "io_path_stat": false, 00:53:45.422 "allow_accel_sequence": false, 00:53:45.422 "rdma_max_cq_size": 0, 00:53:45.422 "rdma_cm_event_timeout_ms": 0, 00:53:45.422 "dhchap_digests": [ 00:53:45.422 "sha256", 00:53:45.422 "sha384", 00:53:45.422 "sha512" 00:53:45.422 ], 00:53:45.422 "dhchap_dhgroups": [ 00:53:45.422 "null", 00:53:45.422 "ffdhe2048", 00:53:45.422 "ffdhe3072", 00:53:45.422 "ffdhe4096", 00:53:45.422 "ffdhe6144", 00:53:45.422 "ffdhe8192" 00:53:45.422 ] 00:53:45.422 } 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 "method": "bdev_nvme_attach_controller", 00:53:45.422 "params": { 00:53:45.422 "name": "nvme0", 00:53:45.422 "trtype": "TCP", 00:53:45.422 "adrfam": "IPv4", 00:53:45.422 "traddr": "127.0.0.1", 00:53:45.422 "trsvcid": "4420", 00:53:45.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:45.422 "prchk_reftag": false, 00:53:45.422 "prchk_guard": false, 00:53:45.422 "ctrlr_loss_timeout_sec": 0, 00:53:45.422 "reconnect_delay_sec": 0, 00:53:45.422 "fast_io_fail_timeout_sec": 0, 00:53:45.422 "psk": "key0", 00:53:45.422 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:53:45.422 "hdgst": false, 00:53:45.422 "ddgst": false, 00:53:45.422 "multipath": "multipath" 00:53:45.422 } 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 "method": "bdev_nvme_set_hotplug", 00:53:45.422 "params": { 00:53:45.422 "period_us": 100000, 00:53:45.422 "enable": false 00:53:45.422 } 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 "method": "bdev_wait_for_examine" 00:53:45.422 } 00:53:45.422 ] 00:53:45.422 }, 00:53:45.422 { 00:53:45.422 
"subsystem": "nbd", 00:53:45.422 "config": [] 00:53:45.422 } 00:53:45.422 ] 00:53:45.422 }' 00:53:45.422 05:44:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:53:45.422 05:44:32 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:53:45.422 05:44:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:45.422 05:44:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:53:45.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:53:45.422 05:44:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:45.422 05:44:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:53:45.681 [2024-12-09 05:44:32.402071] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:53:45.681 [2024-12-09 05:44:32.402340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85188 ] 00:53:45.681 [2024-12-09 05:44:32.548312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:45.681 [2024-12-09 05:44:32.579492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:45.941 [2024-12-09 05:44:32.687979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:53:45.941 [2024-12-09 05:44:32.727240] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:53:46.510 05:44:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:46.510 05:44:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:53:46.510 05:44:33 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:53:46.510 05:44:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:46.510 05:44:33 keyring_file -- keyring/file.sh@121 -- # jq length 00:53:46.769 05:44:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:53:46.769 05:44:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:53:46.769 05:44:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:46.769 05:44:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:53:46.769 05:44:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:46.769 05:44:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:53:46.769 05:44:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:47.028 05:44:33 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:53:47.028 05:44:33 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:53:47.028 05:44:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:53:47.028 05:44:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:53:47.028 05:44:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:53:47.028 05:44:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:47.028 05:44:33 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:47.286 05:44:34 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:53:47.286 05:44:34 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:53:47.286 05:44:34 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:53:47.287 05:44:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:53:47.545 05:44:34 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:53:47.545 05:44:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:53:47.545 05:44:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.JGHD25pASI /tmp/tmp.t10zXKwLHK 00:53:47.545 05:44:34 keyring_file -- keyring/file.sh@20 -- # killprocess 85188 00:53:47.545 05:44:34 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85188 ']' 00:53:47.545 05:44:34 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85188 00:53:47.545 05:44:34 keyring_file -- common/autotest_common.sh@959 -- # uname 00:53:47.545 05:44:34 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:47.545 05:44:34 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85188 00:53:47.545 killing process with pid 85188 00:53:47.545 Received shutdown signal, test time was about 1.000000 seconds 00:53:47.545 00:53:47.545 Latency(us) 00:53:47.545 [2024-12-09T05:44:34.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:47.545 [2024-12-09T05:44:34.517Z] =================================================================================================================== 00:53:47.545 [2024-12-09T05:44:34.517Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:53:47.545 05:44:34 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:47.545 05:44:34 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:47.545 05:44:34 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85188' 00:53:47.546 05:44:34 keyring_file -- common/autotest_common.sh@973 -- # kill 85188 00:53:47.546 05:44:34 keyring_file -- common/autotest_common.sh@978 -- # wait 85188 00:53:47.804 05:44:34 keyring_file -- keyring/file.sh@21 -- # killprocess 84940 00:53:47.804 05:44:34 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84940 ']' 00:53:47.804 05:44:34 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84940 00:53:47.804 05:44:34 keyring_file -- common/autotest_common.sh@959 -- # uname 00:53:47.804 05:44:34 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:47.804 05:44:34 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84940 00:53:47.804 killing process with pid 84940 00:53:47.804 05:44:34 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:47.804 05:44:34 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:47.804 05:44:34 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84940' 00:53:47.804 05:44:34 keyring_file -- common/autotest_common.sh@973 -- # kill 84940 00:53:47.804 05:44:34 keyring_file -- common/autotest_common.sh@978 -- # wait 84940 00:53:48.064 ************************************ 00:53:48.064 END TEST keyring_file 00:53:48.064 ************************************ 00:53:48.064 00:53:48.064 real 0m14.271s 00:53:48.064 user 0m36.850s 
00:53:48.064 sys 0m2.515s 00:53:48.064 05:44:34 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:48.064 05:44:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:53:48.064 05:44:34 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:53:48.064 05:44:34 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:53:48.064 05:44:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:48.064 05:44:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:48.064 05:44:34 -- common/autotest_common.sh@10 -- # set +x 00:53:48.064 ************************************ 00:53:48.064 START TEST keyring_linux 00:53:48.064 ************************************ 00:53:48.064 05:44:34 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:53:48.064 Joined session keyring: 484978330 00:53:48.064 * Looking for test storage... 00:53:48.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:53:48.336 05:44:35 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:53:48.336 05:44:35 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:53:48.336 05:44:35 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:53:48.336 05:44:35 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@345 -- # : 1 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:48.336 05:44:35 keyring_linux -- scripts/common.sh@368 -- # return 0 00:53:48.336 05:44:35 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:48.336 05:44:35 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:53:48.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:48.336 --rc genhtml_branch_coverage=1 00:53:48.336 --rc genhtml_function_coverage=1 00:53:48.336 --rc genhtml_legend=1 00:53:48.336 --rc geninfo_all_blocks=1 00:53:48.337 --rc geninfo_unexecuted_blocks=1 00:53:48.337 00:53:48.337 ' 00:53:48.337 05:44:35 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:53:48.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:48.337 --rc genhtml_branch_coverage=1 00:53:48.337 --rc genhtml_function_coverage=1 00:53:48.337 --rc genhtml_legend=1 00:53:48.337 --rc geninfo_all_blocks=1 00:53:48.337 --rc geninfo_unexecuted_blocks=1 00:53:48.337 00:53:48.337 ' 00:53:48.337 05:44:35 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:53:48.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:48.337 --rc genhtml_branch_coverage=1 00:53:48.337 --rc genhtml_function_coverage=1 00:53:48.337 --rc genhtml_legend=1 00:53:48.337 --rc geninfo_all_blocks=1 00:53:48.337 --rc geninfo_unexecuted_blocks=1 00:53:48.337 00:53:48.337 ' 00:53:48.337 05:44:35 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:53:48.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:48.337 --rc genhtml_branch_coverage=1 00:53:48.337 --rc genhtml_function_coverage=1 00:53:48.337 --rc genhtml_legend=1 00:53:48.337 --rc geninfo_all_blocks=1 00:53:48.337 --rc geninfo_unexecuted_blocks=1 00:53:48.337 00:53:48.337 ' 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:48.337 05:44:35 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=9c49cfea-48fb-4faa-9fb4-07ac0d5b7535 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:48.337 05:44:35 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:53:48.337 05:44:35 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:48.337 05:44:35 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:48.337 05:44:35 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:48.337 05:44:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:48.337 05:44:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:48.337 05:44:35 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:48.337 05:44:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:53:48.337 05:44:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:48.337 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@733 -- # python - 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:53:48.337 /tmp/:spdk-test:key0 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:53:48.337 05:44:35 keyring_linux -- nvmf/common.sh@733 -- # python - 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:53:48.337 05:44:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:53:48.337 /tmp/:spdk-test:key1 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85310 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:53:48.337 05:44:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85310 00:53:48.337 05:44:35 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85310 ']' 00:53:48.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:48.337 05:44:35 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:48.337 05:44:35 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:48.337 05:44:35 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:48.337 05:44:35 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:48.337 05:44:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:53:48.337 [2024-12-09 05:44:35.294733] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
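The `prep_key` trace above writes a formatted TLS PSK to /tmp/:spdk-test:keyN and chmods it to 0600; the actual encoding happens in `format_key`, an inline `python -` heredoc (nvmf/common.sh@733). A rough reconstruction of that encoding is sketched below. The assumptions are that the configured key string is used as literal ASCII bytes and that a little-endian CRC32 is appended before base64-encoding, which is consistent with the `NVMeTLSkey-1:00:...` payloads that show up in the `keyctl add` entries further down.

```python
import base64
import zlib


def format_interchange_psk(key: str, digest: int) -> str:
    """Approximate 'format_key NVMeTLSkey-1 <key> <digest>' from nvmf/common.sh.

    The key string is treated as raw ASCII, a little-endian CRC32 of it is
    appended, and the result is base64-encoded into the TLS PSK interchange
    format (both details are assumptions noted above).
    """
    payload = key.encode("ascii")
    crc = zlib.crc32(payload).to_bytes(4, "little")
    encoded = base64.b64encode(payload + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, encoded)


# The digest-0 key used throughout this run; compare the output against the
# NVMeTLSkey-1:00:... value passed to 'keyctl add' later in the trace.
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```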
00:53:48.337 [2024-12-09 05:44:35.295036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85310 ] 00:53:48.610 [2024-12-09 05:44:35.429030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:48.610 [2024-12-09 05:44:35.456593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:48.610 [2024-12-09 05:44:35.491945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:53:49.575 05:44:36 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:49.575 05:44:36 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:53:49.575 05:44:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:53:49.575 05:44:36 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:49.575 05:44:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:53:49.575 [2024-12-09 05:44:36.187019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:49.575 null0 00:53:49.575 [2024-12-09 05:44:36.218975] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:53:49.575 [2024-12-09 05:44:36.219291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:53:49.575 05:44:36 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:49.575 05:44:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:53:49.575 119551484 00:53:49.575 05:44:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:53:49.575 834160614 00:53:49.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:53:49.575 05:44:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85328 00:53:49.575 05:44:36 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:53:49.575 05:44:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85328 /var/tmp/bperf.sock 00:53:49.575 05:44:36 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85328 ']' 00:53:49.575 05:44:36 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:53:49.575 05:44:36 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:49.576 05:44:36 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:53:49.576 05:44:36 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:49.576 05:44:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:53:49.576 [2024-12-09 05:44:36.302795] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:53:49.576 [2024-12-09 05:44:36.303100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85328 ] 00:53:49.576 [2024-12-09 05:44:36.451228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:49.576 [2024-12-09 05:44:36.479981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:49.576 05:44:36 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:49.576 05:44:36 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:53:49.576 05:44:36 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:53:49.576 05:44:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:53:49.834 05:44:36 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:53:49.834 05:44:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:53:50.093 [2024-12-09 05:44:37.040426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:53:50.352 05:44:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:53:50.352 05:44:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:53:50.611 [2024-12-09 05:44:37.335254] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:53:50.611 nvme0n1 00:53:50.611 05:44:37 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:53:50.611 05:44:37 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:53:50.611 05:44:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:53:50.611 05:44:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:53:50.611 05:44:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:53:50.611 05:44:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:50.869 05:44:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:53:50.869 05:44:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:53:50.869 05:44:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:53:50.869 05:44:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:53:50.869 05:44:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:53:50.869 05:44:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:50.869 05:44:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:53:51.128 05:44:37 keyring_linux -- keyring/linux.sh@25 -- # sn=119551484 00:53:51.128 05:44:37 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:53:51.128 05:44:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
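Unlike keyring_file, keyring_linux keeps the PSKs in the kernel session keyring: `keyctl add user :spdk-test:keyN <payload> @s` returns the key serial (119551484 and 834160614 above), `keyctl search @s user <name>` looks the serial up again, `keyctl print <sn>` dumps the payload, and cleanup removes the link by serial. Below is a small sketch of that round-trip driving the same keyctl CLI from Python; the key name and payload are illustrative placeholders, not the values used by the test.

```python
import subprocess


def keyctl(*args: str) -> str:
    """Run a keyctl subcommand and return its stdout, stripped."""
    result = subprocess.run(
        ("keyctl", *args), check=True, capture_output=True, text=True
    )
    return result.stdout.strip()


def add_session_key(name: str, payload: str) -> int:
    """Mirror 'keyctl add user <name> <payload> @s'; returns the key serial."""
    return int(keyctl("add", "user", name, payload, "@s"))


def get_keysn(name: str) -> int:
    """Mirror get_keysn from keyring/linux.sh: search the session keyring."""
    return int(keyctl("search", "@s", "user", name))


def unlink_key(name: str) -> None:
    """Mirror unlink_key: resolve the serial, then drop the link."""
    keyctl("unlink", str(get_keysn(name)))


if __name__ == "__main__":
    serial = add_session_key(":example:key0", "NVMeTLSkey-1:00:placeholder:")
    assert serial == get_keysn(":example:key0")
    print(keyctl("print", str(serial)))  # shows the stored payload
    unlink_key(":example:key0")
```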
00:53:51.128 05:44:37 keyring_linux -- keyring/linux.sh@26 -- # [[ 119551484 == \1\1\9\5\5\1\4\8\4 ]] 00:53:51.128 05:44:37 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 119551484 00:53:51.128 05:44:37 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:53:51.128 05:44:37 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:53:51.387 Running I/O for 1 seconds... 00:53:52.325 14001.00 IOPS, 54.69 MiB/s 00:53:52.325 Latency(us) 00:53:52.325 [2024-12-09T05:44:39.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:52.325 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:53:52.325 nvme0n1 : 1.01 14029.83 54.80 0.00 0.00 9089.13 5689.72 16920.20 00:53:52.325 [2024-12-09T05:44:39.297Z] =================================================================================================================== 00:53:52.325 [2024-12-09T05:44:39.297Z] Total : 14029.83 54.80 0.00 0.00 9089.13 5689.72 16920.20 00:53:52.325 { 00:53:52.325 "results": [ 00:53:52.325 { 00:53:52.325 "job": "nvme0n1", 00:53:52.325 "core_mask": "0x2", 00:53:52.325 "workload": "randread", 00:53:52.325 "status": "finished", 00:53:52.325 "queue_depth": 128, 00:53:52.325 "io_size": 4096, 00:53:52.325 "runtime": 1.00714, 00:53:52.325 "iops": 14029.827034970313, 00:53:52.325 "mibps": 54.80401185535278, 00:53:52.325 "io_failed": 0, 00:53:52.325 "io_timeout": 0, 00:53:52.325 "avg_latency_us": 9089.127039052952, 00:53:52.325 "min_latency_us": 5689.716363636364, 00:53:52.325 "max_latency_us": 16920.203636363636 00:53:52.325 } 00:53:52.325 ], 00:53:52.325 "core_count": 1 00:53:52.325 } 00:53:52.325 05:44:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:53:52.325 05:44:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:53:52.584 05:44:39 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:53:52.584 05:44:39 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:53:52.584 05:44:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:53:52.584 05:44:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:53:52.584 05:44:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:53:52.584 05:44:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:53:52.843 05:44:39 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:53:52.843 05:44:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:53:52.843 05:44:39 keyring_linux -- keyring/linux.sh@23 -- # return 00:53:52.844 05:44:39 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:53:52.844 05:44:39 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:53:52.844 05:44:39 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:53:52.844 
05:44:39 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:53:52.844 05:44:39 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:52.844 05:44:39 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:53:52.844 05:44:39 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:52.844 05:44:39 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:53:52.844 05:44:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:53:53.103 [2024-12-09 05:44:39.922316] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:53:53.103 [2024-12-09 05:44:39.922325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13845d0 (107): Transport endpoint is not connected 00:53:53.103 [2024-12-09 05:44:39.923317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13845d0 (9): Bad file descriptor 00:53:53.103 [2024-12-09 05:44:39.924315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:53:53.103 [2024-12-09 05:44:39.924331] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:53:53.103 [2024-12-09 05:44:39.924341] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:53:53.103 [2024-12-09 05:44:39.924357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
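The failing attach above is deliberate: linux.sh@84 wraps the second bdev_nvme_attach_controller call in the harness's NOT helper, so the test passes only when the RPC exits non-zero (the keyring error surfaces as the -5 / Input/output error dumped below). A rough stand-alone equivalent of that negative assertion; expect_failure is an illustrative name, not the harness's own helper:

    # Pass only when the wrapped command fails (mirrors the NOT wrapper from
    # autotest_common.sh used around bperf_cmd above).
    expect_failure() {
        if "$@"; then
            echo "expected failure, but command succeeded: $*" >&2
            return 1
        fi
    }

    expect_failure /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1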
00:53:53.103 request: 00:53:53.103 { 00:53:53.103 "name": "nvme0", 00:53:53.103 "trtype": "tcp", 00:53:53.103 "traddr": "127.0.0.1", 00:53:53.103 "adrfam": "ipv4", 00:53:53.103 "trsvcid": "4420", 00:53:53.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:53.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:53:53.103 "prchk_reftag": false, 00:53:53.103 "prchk_guard": false, 00:53:53.103 "hdgst": false, 00:53:53.103 "ddgst": false, 00:53:53.103 "psk": ":spdk-test:key1", 00:53:53.103 "allow_unrecognized_csi": false, 00:53:53.103 "method": "bdev_nvme_attach_controller", 00:53:53.103 "req_id": 1 00:53:53.103 } 00:53:53.103 Got JSON-RPC error response 00:53:53.103 response: 00:53:53.103 { 00:53:53.103 "code": -5, 00:53:53.103 "message": "Input/output error" 00:53:53.103 } 00:53:53.103 05:44:39 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:53:53.103 05:44:39 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:53.103 05:44:39 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:53.103 05:44:39 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:53.103 05:44:39 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:53:53.103 05:44:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:53:53.103 05:44:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:53:53.103 05:44:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:53:53.103 05:44:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:53:53.103 05:44:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:53:53.103 05:44:39 keyring_linux -- keyring/linux.sh@33 -- # sn=119551484 00:53:53.103 05:44:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 119551484 00:53:53.103 1 links removed 00:53:53.103 05:44:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:53:53.104 05:44:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:53:53.104 05:44:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:53:53.104 05:44:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:53:53.104 05:44:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:53:53.104 05:44:39 keyring_linux -- keyring/linux.sh@33 -- # sn=834160614 00:53:53.104 05:44:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 834160614 00:53:53.104 1 links removed 00:53:53.104 05:44:39 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85328 00:53:53.104 05:44:39 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85328 ']' 00:53:53.104 05:44:39 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85328 00:53:53.104 05:44:39 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:53:53.104 05:44:39 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:53.104 05:44:39 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85328 00:53:53.104 05:44:39 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:53.104 05:44:39 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:53.104 killing process with pid 85328 00:53:53.104 05:44:39 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85328' 00:53:53.104 05:44:39 keyring_linux -- common/autotest_common.sh@973 -- # kill 85328 00:53:53.104 Received shutdown signal, test time was about 1.000000 seconds 00:53:53.104 00:53:53.104 Latency(us) 
00:53:53.104 [2024-12-09T05:44:40.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:53.104 [2024-12-09T05:44:40.076Z] =================================================================================================================== 00:53:53.104 [2024-12-09T05:44:40.076Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:53.104 05:44:39 keyring_linux -- common/autotest_common.sh@978 -- # wait 85328 00:53:53.363 05:44:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85310 00:53:53.363 05:44:40 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85310 ']' 00:53:53.363 05:44:40 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85310 00:53:53.363 05:44:40 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:53:53.363 05:44:40 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:53.363 05:44:40 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85310 00:53:53.363 05:44:40 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:53.363 05:44:40 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:53.363 killing process with pid 85310 00:53:53.363 05:44:40 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85310' 00:53:53.363 05:44:40 keyring_linux -- common/autotest_common.sh@973 -- # kill 85310 00:53:53.363 05:44:40 keyring_linux -- common/autotest_common.sh@978 -- # wait 85310 00:53:53.622 00:53:53.622 real 0m5.476s 00:53:53.622 user 0m10.796s 00:53:53.622 sys 0m1.314s 00:53:53.622 05:44:40 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:53.622 05:44:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:53:53.622 ************************************ 00:53:53.622 END TEST keyring_linux 00:53:53.622 ************************************ 00:53:53.622 05:44:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:53:53.622 05:44:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:53:53.622 05:44:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:53:53.622 05:44:40 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:53:53.622 05:44:40 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:53:53.622 05:44:40 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:53:53.622 05:44:40 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:53:53.622 05:44:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:53.622 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:53:53.622 05:44:40 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:53:53.622 05:44:40 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:53:53.622 05:44:40 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:53:53.622 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:53:55.525 INFO: APP EXITING 00:53:55.525 INFO: killing all VMs 
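Cleanup, traced above, is symmetric with setup: each test key is looked up by name, unlinked from the session keyring by serial, and the bdevperf and target processes are killed and waited on. A condensed sketch of that teardown; bperf_pid and tgt_pid stand in for the PIDs (85328 and 85310 in this run) that the real script keeps in variables:

    # Drop both PSKs from the session keyring, then stop the daemons.
    for name in :spdk-test:key0 :spdk-test:key1; do
        if sn=$(keyctl search @s user "$name" 2>/dev/null); then
            keyctl unlink "$sn"      # prints "1 links removed" on success
        fi
    done

    for pid in "$bperf_pid" "$tgt_pid"; do
        kill -0 "$pid" 2>/dev/null || continue   # already gone
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    done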
00:53:55.525 INFO: killing vhost app 00:53:55.525 INFO: EXIT DONE 00:53:56.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:53:56.092 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:53:56.092 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:53:57.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:53:57.028 Cleaning 00:53:57.028 Removing: /var/run/dpdk/spdk0/config 00:53:57.028 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:53:57.028 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:53:57.028 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:53:57.028 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:53:57.028 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:53:57.028 Removing: /var/run/dpdk/spdk0/hugepage_info 00:53:57.028 Removing: /var/run/dpdk/spdk1/config 00:53:57.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:53:57.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:53:57.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:53:57.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:53:57.028 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:53:57.028 Removing: /var/run/dpdk/spdk1/hugepage_info 00:53:57.028 Removing: /var/run/dpdk/spdk2/config 00:53:57.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:53:57.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:53:57.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:53:57.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:53:57.028 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:53:57.028 Removing: /var/run/dpdk/spdk2/hugepage_info 00:53:57.028 Removing: /var/run/dpdk/spdk3/config 00:53:57.028 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:53:57.028 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:53:57.028 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:53:57.028 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:53:57.028 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:53:57.028 Removing: /var/run/dpdk/spdk3/hugepage_info 00:53:57.028 Removing: /var/run/dpdk/spdk4/config 00:53:57.028 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:53:57.028 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:53:57.028 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:53:57.028 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:53:57.028 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:53:57.028 Removing: /var/run/dpdk/spdk4/hugepage_info 00:53:57.028 Removing: /dev/shm/nvmf_trace.0 00:53:57.028 Removing: /dev/shm/spdk_tgt_trace.pid56716 00:53:57.028 Removing: /var/run/dpdk/spdk0 00:53:57.028 Removing: /var/run/dpdk/spdk1 00:53:57.028 Removing: /var/run/dpdk/spdk2 00:53:57.028 Removing: /var/run/dpdk/spdk3 00:53:57.028 Removing: /var/run/dpdk/spdk4 00:53:57.028 Removing: /var/run/dpdk/spdk_pid56563 00:53:57.028 Removing: /var/run/dpdk/spdk_pid56716 00:53:57.028 Removing: /var/run/dpdk/spdk_pid56909 00:53:57.028 Removing: /var/run/dpdk/spdk_pid56996 00:53:57.028 Removing: /var/run/dpdk/spdk_pid57016 00:53:57.028 Removing: /var/run/dpdk/spdk_pid57120 00:53:57.028 Removing: /var/run/dpdk/spdk_pid57130 00:53:57.028 Removing: /var/run/dpdk/spdk_pid57264 00:53:57.028 Removing: /var/run/dpdk/spdk_pid57470 00:53:57.028 Removing: /var/run/dpdk/spdk_pid57620 00:53:57.028 Removing: /var/run/dpdk/spdk_pid57697 00:53:57.028 
Removing: /var/run/dpdk/spdk_pid57774 00:53:57.028 Removing: /var/run/dpdk/spdk_pid57860 00:53:57.028 Removing: /var/run/dpdk/spdk_pid57945 00:53:57.028 Removing: /var/run/dpdk/spdk_pid57978 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58013 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58083 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58174 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58608 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58653 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58696 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58699 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58766 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58775 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58836 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58850 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58895 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58906 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58946 00:53:57.028 Removing: /var/run/dpdk/spdk_pid58957 00:53:57.028 Removing: /var/run/dpdk/spdk_pid59087 00:53:57.028 Removing: /var/run/dpdk/spdk_pid59123 00:53:57.028 Removing: /var/run/dpdk/spdk_pid59205 00:53:57.028 Removing: /var/run/dpdk/spdk_pid59539 00:53:57.028 Removing: /var/run/dpdk/spdk_pid59551 00:53:57.028 Removing: /var/run/dpdk/spdk_pid59588 00:53:57.028 Removing: /var/run/dpdk/spdk_pid59601 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59617 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59636 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59649 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59665 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59684 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59703 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59717 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59737 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59751 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59766 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59785 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59799 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59814 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59839 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59847 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59868 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59893 00:53:57.287 Removing: /var/run/dpdk/spdk_pid59912 00:53:57.288 Removing: /var/run/dpdk/spdk_pid59940 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60008 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60042 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60046 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60080 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60084 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60097 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60134 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60153 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60176 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60191 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60195 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60205 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60214 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60218 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60233 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60237 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60271 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60292 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60307 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60330 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60345 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60347 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60395 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60401 00:53:57.288 Removing: 
/var/run/dpdk/spdk_pid60433 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60441 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60448 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60456 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60463 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60465 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60478 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60480 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60562 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60614 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60726 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60760 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60805 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60819 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60836 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60856 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60887 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60903 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60981 00:53:57.288 Removing: /var/run/dpdk/spdk_pid60997 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61041 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61111 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61167 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61195 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61290 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61331 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61365 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61592 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61689 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61718 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61747 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61781 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61814 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61848 00:53:57.288 Removing: /var/run/dpdk/spdk_pid61879 00:53:57.288 Removing: /var/run/dpdk/spdk_pid62269 00:53:57.288 Removing: /var/run/dpdk/spdk_pid62303 00:53:57.288 Removing: /var/run/dpdk/spdk_pid62648 00:53:57.288 Removing: /var/run/dpdk/spdk_pid63112 00:53:57.288 Removing: /var/run/dpdk/spdk_pid63380 00:53:57.547 Removing: /var/run/dpdk/spdk_pid64219 00:53:57.547 Removing: /var/run/dpdk/spdk_pid65139 00:53:57.547 Removing: /var/run/dpdk/spdk_pid65256 00:53:57.547 Removing: /var/run/dpdk/spdk_pid65318 00:53:57.547 Removing: /var/run/dpdk/spdk_pid66729 00:53:57.547 Removing: /var/run/dpdk/spdk_pid67031 00:53:57.547 Removing: /var/run/dpdk/spdk_pid70771 00:53:57.547 Removing: /var/run/dpdk/spdk_pid71126 00:53:57.547 Removing: /var/run/dpdk/spdk_pid71237 00:53:57.547 Removing: /var/run/dpdk/spdk_pid71376 00:53:57.547 Removing: /var/run/dpdk/spdk_pid71392 00:53:57.547 Removing: /var/run/dpdk/spdk_pid71421 00:53:57.547 Removing: /var/run/dpdk/spdk_pid71455 00:53:57.547 Removing: /var/run/dpdk/spdk_pid71549 00:53:57.547 Removing: /var/run/dpdk/spdk_pid71685 00:53:57.547 Removing: /var/run/dpdk/spdk_pid71830 00:53:57.547 Removing: /var/run/dpdk/spdk_pid71912 00:53:57.547 Removing: /var/run/dpdk/spdk_pid72093 00:53:57.547 Removing: /var/run/dpdk/spdk_pid72163 00:53:57.547 Removing: /var/run/dpdk/spdk_pid72247 00:53:57.547 Removing: /var/run/dpdk/spdk_pid72596 00:53:57.547 Removing: /var/run/dpdk/spdk_pid73011 00:53:57.547 Removing: /var/run/dpdk/spdk_pid73012 00:53:57.547 Removing: /var/run/dpdk/spdk_pid73013 00:53:57.547 Removing: /var/run/dpdk/spdk_pid73263 00:53:57.547 Removing: /var/run/dpdk/spdk_pid73533 00:53:57.547 Removing: /var/run/dpdk/spdk_pid73906 00:53:57.547 Removing: /var/run/dpdk/spdk_pid73919 00:53:57.547 Removing: /var/run/dpdk/spdk_pid74237 00:53:57.547 Removing: /var/run/dpdk/spdk_pid74258 
00:53:57.547 Removing: /var/run/dpdk/spdk_pid74272 00:53:57.547 Removing: /var/run/dpdk/spdk_pid74303 00:53:57.547 Removing: /var/run/dpdk/spdk_pid74315 00:53:57.547 Removing: /var/run/dpdk/spdk_pid74654 00:53:57.547 Removing: /var/run/dpdk/spdk_pid74707 00:53:57.547 Removing: /var/run/dpdk/spdk_pid75035 00:53:57.547 Removing: /var/run/dpdk/spdk_pid75225 00:53:57.547 Removing: /var/run/dpdk/spdk_pid75647 00:53:57.547 Removing: /var/run/dpdk/spdk_pid76180 00:53:57.547 Removing: /var/run/dpdk/spdk_pid77038 00:53:57.547 Removing: /var/run/dpdk/spdk_pid77678 00:53:57.547 Removing: /var/run/dpdk/spdk_pid77680 00:53:57.547 Removing: /var/run/dpdk/spdk_pid79678 00:53:57.547 Removing: /var/run/dpdk/spdk_pid79725 00:53:57.547 Removing: /var/run/dpdk/spdk_pid79778 00:53:57.548 Removing: /var/run/dpdk/spdk_pid79826 00:53:57.548 Removing: /var/run/dpdk/spdk_pid79926 00:53:57.548 Removing: /var/run/dpdk/spdk_pid79979 00:53:57.548 Removing: /var/run/dpdk/spdk_pid80039 00:53:57.548 Removing: /var/run/dpdk/spdk_pid80086 00:53:57.548 Removing: /var/run/dpdk/spdk_pid80450 00:53:57.548 Removing: /var/run/dpdk/spdk_pid81658 00:53:57.548 Removing: /var/run/dpdk/spdk_pid81798 00:53:57.548 Removing: /var/run/dpdk/spdk_pid82033 00:53:57.548 Removing: /var/run/dpdk/spdk_pid82615 00:53:57.548 Removing: /var/run/dpdk/spdk_pid82775 00:53:57.548 Removing: /var/run/dpdk/spdk_pid82932 00:53:57.548 Removing: /var/run/dpdk/spdk_pid83029 00:53:57.548 Removing: /var/run/dpdk/spdk_pid83260 00:53:57.548 Removing: /var/run/dpdk/spdk_pid83370 00:53:57.548 Removing: /var/run/dpdk/spdk_pid84073 00:53:57.548 Removing: /var/run/dpdk/spdk_pid84108 00:53:57.548 Removing: /var/run/dpdk/spdk_pid84149 00:53:57.548 Removing: /var/run/dpdk/spdk_pid84400 00:53:57.548 Removing: /var/run/dpdk/spdk_pid84435 00:53:57.548 Removing: /var/run/dpdk/spdk_pid84469 00:53:57.548 Removing: /var/run/dpdk/spdk_pid84940 00:53:57.548 Removing: /var/run/dpdk/spdk_pid84945 00:53:57.548 Removing: /var/run/dpdk/spdk_pid85188 00:53:57.548 Removing: /var/run/dpdk/spdk_pid85310 00:53:57.548 Removing: /var/run/dpdk/spdk_pid85328 00:53:57.548 Clean 00:53:57.807 05:44:44 -- common/autotest_common.sh@1453 -- # return 0 00:53:57.807 05:44:44 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:53:57.807 05:44:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:57.807 05:44:44 -- common/autotest_common.sh@10 -- # set +x 00:53:57.807 05:44:44 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:53:57.807 05:44:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:57.807 05:44:44 -- common/autotest_common.sh@10 -- # set +x 00:53:57.807 05:44:44 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:53:57.807 05:44:44 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:53:57.807 05:44:44 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:53:57.807 05:44:44 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:53:57.807 05:44:44 -- spdk/autotest.sh@398 -- # hostname 00:53:57.807 05:44:44 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:53:58.066 geninfo: WARNING: invalid characters removed from testname! 
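The geninfo warning above is benign; it only notes that lcov sanitized characters in the test name passed via -t. The capture just shown writes post-run counters to cov_test.info; the cov_base.info merged in below is presumably a matching pre-run capture taken with lcov's -i (initial) mode so that never-executed files still appear with zero coverage. A trimmed sketch of that pairing, where the -i baseline step is an assumption about the earlier, untraced part of the run and most --rc flags are omitted for brevity:

    # Assumed pre-test baseline (lcov -c -i records zeroed counters for every file):
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q -c -i --no-external \
        -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o cov_base.info
    # Post-test capture, as traced above:
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q -c --no-external \
        -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o cov_test.info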
00:54:20.007 05:45:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:54:23.299 05:45:10 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:54:25.833 05:45:12 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:54:28.365 05:45:14 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:54:30.271 05:45:17 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:54:32.809 05:45:19 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:54:35.346 05:45:21 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:54:35.346 05:45:21 -- spdk/autorun.sh@1 -- $ timing_finish 00:54:35.346 05:45:21 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:54:35.346 05:45:21 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:54:35.346 05:45:21 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:54:35.346 05:45:21 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:54:35.346 + [[ -n 5259 ]] 00:54:35.346 + sudo kill 5259 00:54:35.357 [Pipeline] } 00:54:35.373 [Pipeline] // timeout 00:54:35.380 [Pipeline] } 00:54:35.395 [Pipeline] // stage 00:54:35.400 [Pipeline] } 00:54:35.415 [Pipeline] // catchError 00:54:35.424 [Pipeline] stage 00:54:35.427 [Pipeline] { (Stop VM) 00:54:35.440 [Pipeline] sh 00:54:35.722 + vagrant halt 00:54:39.007 ==> default: Halting domain... 
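Taken together, the lcov invocations above implement the usual merge-then-filter flow: combine the baseline and test captures, then repeatedly strip trees that should not count toward SPDK coverage (bundled DPDK, system headers under /usr, and example/app binaries). The same chain, condensed and commented, with the --rc flags trimmed for readability:

    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info           # merge baseline + test
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                # drop bundled DPDK
    lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' \
        -o cov_total.info                                                 # drop system headers
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info        # drop example code
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info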
00:54:45.644 [Pipeline] sh 00:54:45.923 + vagrant destroy -f 00:54:48.494 ==> default: Removing domain... 00:54:48.763 [Pipeline] sh 00:54:49.042 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:54:49.050 [Pipeline] } 00:54:49.063 [Pipeline] // stage 00:54:49.068 [Pipeline] } 00:54:49.081 [Pipeline] // dir 00:54:49.086 [Pipeline] } 00:54:49.099 [Pipeline] // wrap 00:54:49.105 [Pipeline] } 00:54:49.117 [Pipeline] // catchError 00:54:49.125 [Pipeline] stage 00:54:49.127 [Pipeline] { (Epilogue) 00:54:49.139 [Pipeline] sh 00:54:49.419 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:54:54.703 [Pipeline] catchError 00:54:54.705 [Pipeline] { 00:54:54.718 [Pipeline] sh 00:54:55.002 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:54:55.002 Artifacts sizes are good 00:54:55.011 [Pipeline] } 00:54:55.029 [Pipeline] // catchError 00:54:55.043 [Pipeline] archiveArtifacts 00:54:55.052 Archiving artifacts 00:54:55.182 [Pipeline] cleanWs 00:54:55.194 [WS-CLEANUP] Deleting project workspace... 00:54:55.194 [WS-CLEANUP] Deferred wipeout is used... 00:54:55.201 [WS-CLEANUP] done 00:54:55.203 [Pipeline] } 00:54:55.224 [Pipeline] // stage 00:54:55.229 [Pipeline] } 00:54:55.246 [Pipeline] // node 00:54:55.253 [Pipeline] End of Pipeline 00:54:55.291 Finished: SUCCESS